Tuesday, May 12, 2015
PCB Failed, Demo of Bluetooth System
Sadly, the PCB that we made has failed. Below is a video of the original perfboard version of the circuit, which worked fairly well.
We then transferred the circuit to a PCB which, as shown in the video below, did not work at all.
In better news, we were able to put together an Android app to control our circuit. To do this, we attached a Bluetooth module that converts Bluetooth to a serial stream our mbed can handle. The system works as follows: when you start chewing, the vibration motor on your body starts vibrating and will not stop until you open our app and log your meal. When you tap the button for the meal you ate, the app posts to ThingSpeak to record the time and the meal, and then opens MyFitnessPal so that you can log exactly what you ate.
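For reference, here is a minimal Python sketch of the kind of update our app sends to ThingSpeak. The write API key and the meal-to-field mapping are placeholders, not our actual channel settings:

import requests

THINGSPEAK_URL = "https://api.thingspeak.com/update"
WRITE_API_KEY = "YOUR_WRITE_API_KEY"  # placeholder, not our real key
MEALS = {"breakfast": 1, "lunch": 2, "dinner": 3, "snack": 4}  # illustrative mapping

def log_meal(meal):
    # ThingSpeak's update endpoint takes a write key plus channel fields
    # and timestamps each entry server-side when it is received.
    response = requests.post(THINGSPEAK_URL, data={
        "api_key": WRITE_API_KEY,
        "field1": MEALS[meal],
    })
    # The response body is the new entry id, or 0 if the update failed.
    return int(response.text) > 0

print("logged:", log_meal("lunch"))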
Since we had everything working after the 519 demos earlier today, we even had Deeksha and Sid try it out. I was able to get a video of Sid trying out our system.
Thursday, April 30, 2015
Bluetooth Works
Today we got Bluetooth integration working with our Android app. Building the app was a pain, but eventually we got our mbed and my Android phone to communicate. When the user has been chewing for too long, the vibration will not stop until they log their meal in the app; they then have the option to fill out the meal details on MyFitnessPal. A rough sketch of the exchange over the serial link is below.
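In this sketch, the port name and the "CHEW"/"LOGGED" message strings are made-up placeholders, and Python stands in for the Android side of the link:

import serial  # pyserial

def run_link(port="/dev/rfcomm0", baud=9600):
    # The mbed raises an alert when chewing is detected and keeps the
    # vibration motor running until the app acknowledges a logged meal.
    with serial.Serial(port, baud, timeout=1) as link:
        while True:
            line = link.readline().decode(errors="ignore").strip()
            if line == "CHEW":
                meal = input("Which meal did you just eat? ")  # app UI in the real system
                link.write(b"LOGGED\n")  # the ack stops the vibration
                print("would now post", meal, "to ThingSpeak and open MyFitnessPal")

run_link()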
Sunday, April 26, 2015
First Demo
On our first demo day, we ran into a couple of bugs during our presentation. We made our own EMG circuit, but sadly we were not able to demo it because we were missing some of the ICs we had ordered for it. Furthermore, our vibration motor died early in the demos due to a loose connection somewhere in our perfboard. We now have the basics of our design completed, and for the next demo day we have started work on an Android app that will interface with our circuit.
Tuesday, April 21, 2015
Goals
Baseline Goal (4/24/2015):
As stated in our project proposal, our baseline goal is to detect when a person is chewing and to give them feedback reminding them to watch what they are eating, using an mbed and our own EMG sensor. To meet this goal we must package our product into the fanny pack and integrate our own EMG sensor, which we will receive on Wednesday.
Reach Goal (5/1/2015):
We will integrate an Android app that allows the user to tag their eating habits. They will not only get feedback from the sensors on their body, but will also be able to create timestamps of when they were eating, in order to see how strictly they are adhering to their diet.
Sunday, April 19, 2015
More Difficulty in Detecting Swallowing
On Friday, we mainly reviewed the materials that were sent to us on using a microphone, and found that approach infeasible because of the ideal environment and expensive equipment required in the study we examined.
We then moved on and tried to detect swallowing with our EMG sensors. We tried two electrode locations: below the chin, as we had seen in another study, and on the side of the face, where our chewing EMG currently sits. For each location we captured one minute of training data for talking and one minute for swallowing; I drank a little more than a gallon of water over the two minutes of swallowing capture.
When we started looking at the chin electrode data, things looked promising: our decision tree predicted talking versus swallowing with 85% accuracy on the training data. We soon learned this was a fluke. The tree had overfit the training data, and on our held-out test sets the accuracy was between 11% and 50%.
Detection of swallowing from the side of the face was even worse: even on the training set we were only about 50% accurate.
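The check that exposed the problem is simple to reproduce. Below is a rough Python sketch of it using scikit-learn; the feature matrix here is random noise standing in for our EMG windows, which is exactly the situation where training accuracy looks great and held-out accuracy collapses:

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 4))             # stand-in for per-window EMG features
y = rng.choice(["talk", "swallow"], 120)  # labels with no real relation to X

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.5, random_state=0)
tree = DecisionTreeClassifier().fit(X_train, y_train)

# A big gap between these two scores means the tree memorized the
# training windows instead of learning a generalizable rule.
print("train accuracy:", tree.score(X_train, y_train))  # ~1.0
print("test accuracy: ", tree.score(X_test, y_test))    # ~0.5, chance level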
After seeing these results, we searched online to see whether anyone had been able to detect swallowing with an EMG. It turned out this, too, required an ideal setup: in the study we found, "oscilloscope traces were started at the examiner's order to swallow."
Basically, what we have found is that swallowing can only be detected in an ideal environment where the user is doing nothing but swallowing. The reason is that the muscles used to swallow are also engaged constantly when talking and during other normal activities.
After three days of trying to get swallowing detection to work, we have decided to move on to packaging our product and making sure that chewing detection works properly and gives the user valuable feedback.
Friday, April 17, 2015
Difficulty in Detecting Swallowing
Today we have been trying to use different sensors to detect whether a person is swallowing. When we compare the data from swallowing to resting or talking, we can never see any significant difference.
Professor Mangharam sent us some studies a while ago where the researchers tried to classify swallowing. With significant time, they were able to do so. However, they did so solely by isolating swallowing; they did not consider when the user would be talking.
According to one study, "a limitation of the current study was the need to have a quiet environment. If recording included noise originating from talking, body movements, occasional intrinsic sounds (e.g., coughing), and background noise of different sources, a more complicated algorithm would be needed." Since our product is intended for the user to wear throughout the day, detecting swallowing through audio or electrical signals would not be feasible.
We will continue to test different avenues for detecting swallowing and report on their effectiveness.
Wednesday, April 15, 2015
EMG Circuit
Today I finished building the hardware for the EMG circuit, and I sent out a PCB design that should arrive in the middle of next week. In case the PCB does not work, I will also prepare a perfboard version of the circuit this weekend. I also began on the actuation component of the project: providing feedback to the user with vibration motors.
We decided to purchase a fanny pack to store all of the equipment, since we have three bulky batteries that would not be feasible to store in a necklace. We still need to figure out how well the user can perceive haptic feedback from the vibration motors; most likely we will place them on the outside of the fanny pack, resting against the wearer's t-shirt.
Saturday, April 11, 2015
MBED
Last night we got the classification working, with some bugs, on the mbed. To do this we needed to translate all of our Python functions into C++. This was a little more difficult than I had expected, because I have little knowledge of C++ and the iterator type hung me up for a while. In the end I figured it out and tested our tree on the mbed. It was able to differentiate between talking and chewing, but for some reason the transition from chewing to talking took the mbed longer than expected, while the transition from talking to chewing was almost immediate. I did not take any video of this part because the only output right now is a blinking LED, so it was not super exciting. Here is a link to the code that is currently on our mbed.
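One way to make this kind of hand-translation manageable (a sketch of the idea, not exactly what we ran) is to dump the trained scikit-learn tree as plain if/else logic that can be pasted into a C++ function. The feature and class names below are placeholders:

from sklearn.tree import _tree

def tree_to_cpp(clf, feature_names, class_names):
    # Walk sklearn's internal node arrays and print C++-style branches.
    t = clf.tree_

    def emit(node, depth):
        pad = "    " * depth
        if t.feature[node] == _tree.TREE_UNDEFINED:  # leaf: pick the majority class
            label = class_names[int(t.value[node][0].argmax())]
            print(pad + 'return "' + label + '";')
        else:
            name = feature_names[t.feature[node]]
            print(pad + "if (" + name + " <= " + ("%.6f" % t.threshold[node]) + ") {")
            emit(t.children_left[node], depth + 1)
            print(pad + "} else {")
            emit(t.children_right[node], depth + 1)
            print(pad + "}")

    emit(0, 1)

# Usage, assuming clf = DecisionTreeClassifier().fit(X, y):
# tree_to_cpp(clf, ["entropy", "mean_abs"], ["talk", "chew"])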
Thursday, April 9, 2015
Summary of Data Capture and Classification
Over the past few days I have worked on capturing data from our mbed, processing it with a Python script, and then using a machine-learning tool to classify the data we captured.
We started by using an accelerometer, since we did not have our EMG sensor yet.
Here is a MATLAB plot of the data we collected from the accelerometer. We were able to classify this data based only on the entropy of the signal.
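For concreteness, here is a small Python sketch of that feature: the Shannon entropy of a window of samples, estimated from a histogram. The window length, bin count, and the assumption that samples are normalized to [0, 1] (as mbed analog reads are) are illustrative choices:

import numpy as np

def window_entropy(samples, bins=16):
    # Histogram over the full 0-1 range so a flat, resting signal stays
    # concentrated in a few bins while an active signal spreads out.
    counts, _ = np.histogram(samples, bins=bins, range=(0.0, 1.0))
    p = counts / counts.sum()
    p = p[p > 0]                    # drop empty bins before taking the log
    return -np.sum(p * np.log2(p))  # entropy in bits

rng = np.random.default_rng(0)
rest = 0.5 + rng.normal(0, 0.005, 256)             # quiet signal: low entropy
active = np.clip(rng.normal(0.5, 0.2, 256), 0, 1)  # busy signal: high entropy
print(window_entropy(rest), window_entropy(active))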
After we got our EMG sensor, we began capturing data. We had to figure out where to put the electrodes, and decided that the side of the face would be the best location.
Once we found that we could reliably capture the EMG data, we captured a full minute of resting-and-talking data and a full minute of chewing data. We then put it through a machine-learning program, which produced the following tree; a sketch of the training step is below.
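This is a rough sketch of that training step, reusing window_entropy from the entropy sketch above. The sample rate and window size are assumptions, not our measured values:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

FS = 500          # assumed EMG sample rate, in Hz
WINDOW = FS // 2  # half-second windows, also an assumption

def features(signal):
    # Chop the capture into fixed windows and compute one feature per window.
    trimmed = signal[: len(signal) // WINDOW * WINDOW]
    return np.array([[window_entropy(w)] for w in trimmed.reshape(-1, WINDOW)])

def train(rest_talk_signal, chew_signal):
    X_rest, X_chew = features(rest_talk_signal), features(chew_signal)
    X = np.vstack([X_rest, X_chew])
    y = ["rest/talk"] * len(X_rest) + ["chew"] * len(X_chew)
    # A shallow tree keeps the later hand-translation to the mbed simple.
    return DecisionTreeClassifier(max_depth=3).fit(X, y)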
Our next step is to recreate the tree on the mbed and use it to classify the data.