Hola, I'm Claudio Vallejo.

    I'm a Mexican software engineer living in Brooklyn. By day I work at Opus Training, and in my spare time I'm building a heart rate monitor for emotions. Currently, I'm using EEG and EKG technology to build a computer that can measure my emotions in real time.

    This website is my journal. I try to post once a day to summarize what I learned, what I'm doing, or what I'm thinking as I learn how to build an emotion monitor.

    You can find me on Twitter or via email at claudio@c14.mx.

  • May 15, 2025

    1

    I'm in Mexico right now, in my hometown to renew my visa. I didn't bring my EEG, so I haven't had a chance to fix the last few bugs and test the video-recording UI. But I'm looking forward to doing that when I'm back in NY.

    The one thing that is constantly on my mind as I work remotely is: How can I be a better engineer? How can I move faster? How can I learn from my mistakes and not repeat them? What mistakes do I make every day? I work with an incredible team of engineers at Opus. I look up to them to see how I can do things more efficiently, faster, and with as little code as possible. It's hard. I'm the most junior engineer on my team. And I look forward to going to work every day because I love working with my co-workers and also because I look forward to being better than the previous day, doing things faster, and taking on larger and more complex projects.

    And along those lines of continuous self-improvement, I saw this YouTube video posted by Sequoia Capital on May 8th interviewing Bret Taylor about how software business models will change in the next 5 years. Who is Bret Taylor? Bret is a software engineer (although he's done a whole lot more than just software engineering in his career). He's a Stanford grad who started his career working on Google Maps. Later, he co-founded Quip, was CTO of Facebook, Co-CEO of Salesforce, and is currently Co-Founder of Sierra. What is Sierra? Sierra is a company that helps businesses build better customer experiences with AI. The core belief that Bret holds (which led to the creation of Sierra) is that "every company's main digital interface will be an AI agent" (5:55 - 6:05).

  • May 8, 2025

    2

    This morning I finished building all the pieces to start testing video-watching while recording my heart rate activity and brain activity. There are bugs yet to fix. But that's what I'll focus on tomorrow morning. Once I fix the issues I'll run some more tests against longer videos with more video clips and will find a time to watch a horror movie.

  • May 7, 2025

    3

    Between Sunday and this morning I was able to bring together everything I've built into a single experience. This morning I recorded a video using my webcam while also streaming heart rate data from my heart rate monitor and streaming data from the OpenBCI Ganglion. I was able to capture all this data while watching a 2min video, and the data for an 8sec video clip I defined was captured. Everything is looking good. There are 3 things I'd like to test next: (1) the quality of the captured data, (2) that the setup works for longer videos, and (3) that the setup works for more and longer video clips.

    I've been using Cursor to build this app. And it's incredible. But I have found myself feeling very frustrated using Agent mode with Claude 3.7. Claude often does things I never asked it to, and when it undoes changes it sometimes undoes changes from a previous, unsuccessful change we had just tried out. This morning I worked with Agent mode using Gemini 2.5 Pro. It's incredible. It's very thorough. It takes a bit longer to execute prompts and make code changes, but its changes are thoughtful and intentional. It gathers context about the problem, makes a few attempts, and very clearly asks you for guidance when it runs into an issue achieving your original prompt. It's really nice to work with Gemini. Gemini 2.5 Pro has been great so far at coding and fixing things for me, while Claude 3.7 has been annoying and not "thoughtful" or intelligent enough.

    I'm traveling this weekend. But I will bring all my equipment to continue testing and refining the app. I should start capturing EEG data for video-watching later this week. And hopefully, start capturing EEG data for movie-watching in 1 - 2 weeks. The more challenging part will be to find a place where I don't feel safe and where I can more strongly recreate the deep feelings of fear I tend to feel when I'm alone. At night. Watching a horror movie like The Conjuring or The Witch. When there's someone else in my apartment, even if not in the same room, I don't feel scared. I feel safe. It's when I'm alone that my head starts to spin a little more and the horror movie feels more real.

  • May 3, 2025

    4

    Between Thursday and this morning (Saturday) I was able to finish creating the experiment setup. I am connecting to my 3 devices (webcam, heart rate monitor, brain activity monitor), selecting a movie, and choosing an emotion to measure during the movie watching. The last step is to create the database modeling to handle all the experiment test data relations and test the capture of the data. I'm going to start with a 2min video with one 30sec video clip. If that works and the data looks good, I will double it. And I will double or triple several times more until I'm confident I can reliably capture data during video clips for a 2+ hour video (aka a movie) with 15+ video clips.

    I found a really cool YouTube channel. It's called NeuroTechnology Exploration. The channel has very thorough, professional videos on introductory topics for understanding neuroscience from various fields (anatomy, physiology, technology). And it even has a very advanced video on EEG and BCIs. This is the level of understanding I'd like to develop over the next few years to have a much firmer sense of the field's limitations and potential as I work on the emotion-monitoring computer I'm trying to build to quantify the biology of emotions, and in turn, some types of mental health disorders.

  • Apr 30, 2025

    5

    This morning I created the EXPERIMENT model and built the UI to display and create experiments. And I started working on the experiment detail page. My next step will be to work on the UI to create an experiment's protocol. This protocol will just be a markdown text editor (a string field in the db) that will allow me to specify the steps, details, and requirements for conducting the experiment. After I wrap that up I'll move on to build the setup interface. This setup will run the checks against all the required inputs to make sure things look good to play and record!

  • Apr 29, 2025

    6

    I recorded my first brain activity session this morning! I built the pending database models and added UI to CRUD brain activity recording sessions (BARS). Before I can record a new BARS, I learned I have to run an impedance test. An impedance test checks the resistance in each Ganglion channel. The higher the resistance, the lower the quality of the signal received by the Ganglion. So before every BARS, I check the impedance of my FP1 and FP2 sensors (the 2 electrodes I place on my forehead). I can only start recording a session once the impedance from both electrodes is less than 150 ohms. I'm saving the binary data for each data packet emitted by the Ganglion. And the Ganglion (at least by default) emits 105 packets PER SECOND.

    My next step is to design and build the experiment UI. The experiment UI will allow me to relate all the input types so I can analyze them and iterate on the experiments. I will also run some experiments with 2-3 minute videos. I'm not going to watch The Conjuring just to experience a bug or error that ruins the recording.

    After work, I designed 5 new database models that will help relate and store the experiment tests: experiment, experiment_test, experiment_test_data, video_emotions_data, and timestamps. Oh yes. I've decided to add a fourth data input. Under the hood they are called "timestamps". But I will use these timestamps to manually mark moments during the movie when I'm feeling the "target_emotion". For example, as I watch The Conjuring, I will configure a button on my keyboard to create a timestamp or marker at that moment in time. For The Conjuring, the target emotion I'm trying to measure is fear. And as I watch the movie, I will record moments when I feel fear. I will define what "fear" feels like and try to stay aware of my body during the movie. These timestamps could help bookmark my movie-watching sessions. Since I won't know when the other biometric readings are recording, this will allow me to mark parts of the movie as I experience it. Some of these markers may overlap during recording sessions. Others may not. And that's okay.
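    Roughly, the shapes I have in mind for those five models look something like this. The field names are just a first guess, not the actual schema:

```typescript
// Hypothetical shapes for the five models named above; field names are guesses.
interface Experiment {
  id: string;
  title: string;
  targetEmotion: string;            // e.g. "fear" for The Conjuring
}

interface ExperimentTest {
  id: string;
  experimentId: string;             // belongs to an Experiment
  startedAt: Date;
}

interface ExperimentTestData {
  id: string;
  experimentTestId: string;
  kind: "face_video" | "heart_rate" | "brain_activity";
  recordingSessionId: string;       // points at the raw recording session (e.g. a BARS)
}

interface VideoEmotionsData {
  id: string;
  experimentTestId: string;
  videoClipId: string;              // the pre-defined emotive clip being watched
}

interface EmotionTimestamp {
  id: string;
  experimentTestId: string;
  offsetMs: number;                 // how far into the movie the key was pressed
}
```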

  • Apr 28, 2025

    7

    This morning I was able to connect to my OpenBCI Ganglion and record brain activity sessions. I was also able to run an impedance check on the Ganglion input channels to test the quality of the signal. My next step is to build the db models and create controllers to update the db with the new data.

    I didn't expect to see so much data. The Ganglion emits about 105 "packets" of data every second. That's 31.5k packets of brain activity data for a 5min recording session. That's a lot of data compared to the heart activity emitted by the Polar H10. I'm not sure about this, but it's likely that the Polar sends 1 packet per second. I'll have to look more into the storage of the brain activity data. I don't know much about postgresql read/write limits and storage limits. Not sure the read/write will be an issue since these writes don't have to be immediate and can happen in bulk or in chunks. But still. Curious to know how many 5min brain activity sessions my database can handle. After a bit of reading about psql, it seems like creating and getting 30k records is not at all an issue. I don't know the limitations, but psql can easily create and resolve queries with millions of records. A 5min brain activity session at 105 samples / second will result in about 900kB of data (according to Claude).

    I also learned that it's best to store the raw binary data from the Ganglion stream. That way, I can play around with different post-processing functions when analyzing and visualizing the data. Which makes me realize... I will also store the binary data for heart rate recording sessions.
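    Quick back-of-the-envelope math on that (the bytes-per-packet number is just an assumption to make the ~900kB figure work out):

```typescript
// Rough storage estimate for a 5min Ganglion session.
const packetsPerSecond = 105;
const sessionSeconds = 5 * 60;
const assumedBytesPerPacket = 28;   // assumption: ~900kB / 31,500 packets ≈ 28 bytes each

const totalPackets = packetsPerSecond * sessionSeconds;                       // 31,500
const approxKB = Math.round((totalPackets * assumedBytesPerPacket) / 1000);   // ≈ 882 kB

console.log({ totalPackets, approxKB });
```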

  • Apr 27, 2025

    8

    I'm building an app that records biometric data while I watch emotive scenes in a movie. Why? Because I'm curious to know if there are recognizable brain activity patterns hidden in the data that correlate with human feelings. For these movie-watching experiments I'm capturing 3 biometrics during 5min segments of the movies: 1) video of my face, 2) my heart rate (in beats per minute, bpm), and 3) my brain activity (???).

    Yesterday, I built a video recorder. I can connect an external camera (an Opal Tadpole) to my computer, and my app uses that camera to record and create mp4 videos. Today, I'm going to focus on building a heart rate monitor. I'm going to try to connect to my Polar H10 using Bluetooth and capture my bpm for 1min. Like the video recorder, I want the heart rate monitor to have a record button, which will start a recording session. And then, at the end of the recording, I will be able to compile the session's data and show a graph of it. Let's see what I can do in 1 hour.

    In the last hour I was able to connect to my Polar H10 using Bluetooth and stream bpm data in real time to my NextJS app (a sketch of the Bluetooth piece is at the end of this entry). I created new db tables to save Bluetooth devices, pair them, and manage them from a settings dialog (or popup). Next, I'll build a heart rate monitor recorder to create heart rate sessions. Let's see what I can do in 30mins.

    It was way more than 30mins. But... I finished building the heart rate functionality. I can live stream my bpm data and record sessions of my heart rate activity. The last step will be to build the brain activity functionality. I need to live stream my brain activity data and record sessions of my brain activity. I'm not exactly sure yet what data I'll store for each session, but I'll see what I can extract using my Bluetooth connection. It's very easy to extract the heart rate activity data from the Polar H10, and implementing the extraction logic was quite straightforward. I hope to use a similar approach to extract whatever raw data the OpenBCI Ganglion emits. I built the base UI to record brain activity. Tomorrow morning I'll build the db models and stream data from my OpenBCI Ganglion.
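    For reference, the Bluetooth piece boils down to something like this. It's not my exact code, just the shape of it, using the standard GATT Heart Rate service the Polar H10 exposes:

```typescript
// Sketch: stream BPM from a Polar H10 over Web Bluetooth (standard Heart Rate service).
async function streamHeartRate(onBpm: (bpm: number) => void): Promise<void> {
  const device = await navigator.bluetooth.requestDevice({
    filters: [{ services: ["heart_rate"] }],
  });
  const server = await device.gatt!.connect();
  const service = await server.getPrimaryService("heart_rate");
  const characteristic = await service.getCharacteristic("heart_rate_measurement");

  characteristic.addEventListener("characteristicvaluechanged", (event) => {
    const value = (event.target as BluetoothRemoteGATTCharacteristic).value!;
    // Bit 0 of the flags byte says whether the heart rate value is 8-bit or 16-bit.
    const flags = value.getUint8(0);
    const bpm = flags & 0x01 ? value.getUint16(1, true) : value.getUint8(1);
    onBpm(bpm);
  });

  await characteristic.startNotifications();
}
```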

  • Apr 26, 2025

    9

    For the last week I've slowly been building a UI to conduct the movie-recording experiment. I just finished building the video tab / section of the NextJS app. This tab allows me to add movies to my app and create video clips from each movie. To avoid spoiling the movie for myself, I've been prompting Claude to generate 5min video clips from the jump scenes listed in Where's The Jump? It's very easy to create video clips with descriptions. Claude just formats the timestamps of each jump scene by adding 2.5min of padding before and after the jump scene to create a video clip (a small sketch of this is at the end of this entry). Next, I'm going to design the experiment part of the NextJS app. I'm not sure about the details of how or what this part of the app will do or look like, but the goal is to automate my biometric recordings (heart rate, face video, brain activity) while watching a video. BRB...
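    The padding logic is trivial, but here's roughly what Claude does with each jump-scene timestamp (my own sketch, clamped to the movie length):

```typescript
// Turn a jump-scene timestamp (seconds) into a ~5min clip with 2.5min of padding on each side.
interface Clip {
  start: number; // seconds
  end: number;   // seconds
}

function clipAroundJumpScene(jumpSceneAt: number, movieDuration: number, paddingSec = 150): Clip {
  return {
    start: Math.max(0, jumpSceneAt - paddingSec),
    end: Math.min(movieDuration, jumpSceneAt + paddingSec),
  };
}

// e.g. a jump scene at 41:10 in a 112-minute movie:
console.log(clipAroundJumpScene(41 * 60 + 10, 112 * 60)); // { start: 2320, end: 2620 }
```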

  • Apr 17, 2025

    10

    I've spent the last few mornings trying to figure out how to convert a DVD into an mp4 (for private use only). It took me a while. Much longer than I expected. Here is the flow (sketched in code at the end of this entry):

    1) I ended up using MakeMKV to extract an .mkv file from the DVD.
    2) Create an mp3 file from the main English audio track.
    3) Create an mp4 file for the main title video.
    4) Merge the mp3 with the mp4 into a new mp4 file.

    This was the process I ended up with after experimenting with several different methods that resulted in either very poor video quality, the wrong audio track, or no audio track at all. Along the way I iterated on an interactive CLI prompt that lets me generate a quick 5min sample video clip to test the video and audio before I run the full video conversion (which takes about 10min). I made the mistake of running the 10min task multiple times only to find myself watching a video with no audio. My next step is to build the UI to CRUD video clips from these mp4 video files. It was fun to learn about .mkv files (Matroska Video). They're a multimedia file type that stores video, audio, subtitle tracks, and metadata.
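    I used MakeMKV for step 1; for steps 2-4, a script along these lines would do the job with ffmpeg. This is a simplified sketch of the flow, not my exact CLI tool, and the track indexes and codecs are assumptions:

```typescript
// Sketch of steps 2-4 using ffmpeg from Node (assumes ffmpeg is installed and on PATH).
import { execSync } from "node:child_process";

const run = (cmd: string) => execSync(cmd, { stdio: "inherit" });

// 2) Extract the main English audio track as an mp3 (assuming it's the first audio stream).
run("ffmpeg -i movie.mkv -map 0:a:0 -vn -c:a libmp3lame audio.mp3");

// 3) Extract the main title video as an mp4.
run("ffmpeg -i movie.mkv -map 0:v:0 -an -c:v libx264 video.mp4");

// 4) Merge the mp3 and mp4 into the final mp4.
run("ffmpeg -i video.mp4 -i audio.mp3 -c:v copy -c:a aac -shortest merged.mp4");

// Quick 5min sample to check video + audio before committing to the full conversion.
run("ffmpeg -i merged.mp4 -t 300 -c copy sample.mp4");
```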

  • Apr 15, 2025

    11

    I created a draft plan to start recording my EEG/EKG data while watching movies. I got a DVD player and The Conjuring and Sinister. I started designing an app to view my videos and save video clips (which will be used to record EEG/EKG sessions). But then, as I was designing the app, I stopped. Designing and building custom software is a lot of work. What would I do if I didn't know how to code? Answer? There's an app for that. QuickTime Player to watch and trim videos. HandBrake to digitize DVDs into mp4 files. Etc etc. As much as I love designing and building software, I need to conduct each experiment with minimal cost and time and acquire biometric data in a controlled, repeatable way so I can start analyzing it. Only when I absolutely need custom software will I design it. Other than that, I will find the right tools and documentation format to conduct and keep track of these experiments.

  • Apr 12, 2025

    12

    Today I made progress on designing the local web app I'll be using to manage the source videos for my experiment. This app will let me work with the .mp4 movie file: scroll through the video and use some UI to save the clips of the movie I'd like to pre-define for recording my biometric data. A simpler solution would be to record myself for the full 1.5+ hrs of movie time. That would be great, but it would mean keeping my EEG device and heart rate monitor charged for the whole movie, which I don't want and don't need. I just want to record biometric data for 3-5min parts of the movie. These parts are emotive scenes during which I expect to capture a specific emotion I'm trying to spot in my EEG (or EKG) data. This might be a little too much work just to record myself. But it will make it easy for me to record future movies. And to eventually define a step-by-step protocol to achieve the best quality results (and avoid any mistakes that have cost me time).

    A few other things I was thinking about today: 1) I don't want to build this with AI. I want to build it myself. It will be slightly slower, but my goal is to build things quickly and, when I'm stuck building something, to just break the problem down into smaller parts. Building with AI alienates me from the system I'm building. It's easy to make mistakes and write complicated code with AI because I'm the only contributor to this project; having other contributors would force me to review my code more carefully. And even though I will move significantly slower without AI, I want to focus on understanding everything very well and building things intentionally. 2) What if this initial experiment could be used to capture other people's biometric data, too? If I can't spot any particular patterns in my EEG data, one other variable I can play with is scale. How can I record other people's EEG data while they watch a movie? How similar is EEG data across people?

    I'll finish designing the last few pieces of the app and then I'll get to building it. For now, I'm taking a quick break from the EEG book I was reading. I'll have to come back to it once I've captured a handful of sessions and have to analyze my brain activity data, so that I approach the analysis strategically and with a good fundamental understanding of what I should look out for. The EEG book has some great chapters on the essence of reading brain activity data.

  • Apr 11, 2025

    13

    This morning I sat down to draft the experiment, the different steps, and some of the materials I'll need to conduct it. The goal of the experiment is to capture my EKG, EEG, and a video of my face while watching emotive snippets of pre-selected movies. The question I'm trying to answer from the generated data is: Are there identifiable patterns in my EEG recordings that map to the emotion I labeled myself experiencing during those emotive movie clips?

  • Apr 10, 2025

    14

    My heart rate monitor arrived last night. I got a Polar H10. I love the experience of measuring my heart rate. I love it because it straps around my chest, it's invisible, and the data is easy to read and export. Not only that, but the battery supposedly lasts for 400hrs, it can connect to your iPhone or other fitness tracking apps (like Strava), and it comes with the Polar Flow app, which makes it super easy to record new sessions. I was able to export a 5min log of my heart rate. It comes with a bpm value for each second of the session. That's very close to how I want to be able to export my emotional data in the future.

  • Apr 9, 2025

    15

    Yesterday I learned about Nudge – a startup based in SF founded by Fred Ehrsam – who's on a mission to improve daily life by developing the best brain-computer interface technology (nudge.com). It's exciting to see new startups (with a very experienced team) jump into this field and develop technology to help address some of the worst kinds of human suffering – addiction, chronic pain, and anxiety.

    Today, I started reading Brené Brown's book, Atlas of the Heart. I chose this book because it maps the range of human emotions and experiences in an approachable, easy-to-read way. What is my experiment? To capture EEG data, facial data, and heart rate data while I watch a movie or video. I will then export the data and try to find 'footprints' or signature brain activity patterns for specific emotions. But what emotion should I try to identify first? What even is an emotion? That's where Brené's book comes in. I thought of it this morning while having coffee with Ema. And Brené's book is a great map of human emotions.

    The book explores 87 emotions and experiences. They are thoughtfully organized into groups that allow Brené to better explain what each emotion is in the context of related emotions (for example, hopelessness, hope, despair, sadness, and grief are part of the "Hurting" group). These 87 emotions come from an analysis she made in 2013-2014 of 550,000 comments from 66,000+ participants who took one of her online courses. The comments were analyzed based on how often the emotion / experience appeared and how easy / hard it was to label. She arrived at a list of 150 emotions and experiences, and from that list, Brené invited a group of diverse and experienced therapists to help her narrow it down further. The list of emotions Brené covers in Atlas of the Heart is very thoroughly researched and communally drafted. What I find interesting is that Brené acknowledges that there is a lot of ambiguity about what emotions even are. She even jokes in the introduction of her book that there are as many theories of emotion as there are emotion theorists. The one thing that most "emotion experts" do agree on? That there are universal voice and face expressions of emotion.

    Okay. So, back to my experiment. What emotion should I try to recreate in myself to see if I can find some form of signature pattern in my data? Honestly, it's kind of hard to pick one! As I read through Brené's list of emotions – stress, fear, regret, sadness, joy, anger, contempt, disgust – I'm picturing myself sitting down and trying to watch a video or movie clip that may elicit a specific emotion repeatedly. Maybe disgust? Or anger? Or sadness? I could watch a movie and play a scene where a son is seeing his mother die in a hospital. Those really hit me. But I'm not sure it will hit me all wired up. You know what I noticed in Brené's list? There is no group of feelings or experiences related to sex or procreation. I know it's an uncomfortable topic. But. The desire to procreate is, in my opinion and without any form of data to back up my statement, one of the strongest instigators of action. And not only that, but in males, there's a clear physical sign of arousal. Sorry if this is uncomfortable to read. I'm trying to find an emotion to test against in my experiment. And now that I think about it, ideally, this emotion comes with a clear physical indicator. That's a relevant tangent.

    But, for context, I'm going to stick to an emotion listed in Brené's book. And if I fail to make any progress with one of those emotions, I'll explore more widely and re-define my criteria for selecting a good emotion candidate. For now, I'll continue reading. And my gut tells me that sadness or disgust are 2 good candidates. Oh... random thought. I wonder how much easier it would be to recreate fear if I watch movies or videos that make me deeply afraid? As a kid I was terrified of spirits and demons. The idea of going to hell kept me up at night. And the idea of being possessed? Even more so. I could watch The Exorcist (I've never watched it before) late one night. Alone in my apartment? Maybe that could help? Anyway... will keep reading. But I'd like to have a draft of my experiment and plan published on my blog by Saturday. That way, I can start setting up this crazy movie setup.

  • Apr 8, 2025

    16

    I'm reading through Chapter 4 and learning about bipolar montages. A bipolar montage is a way of measuring brain activity by comparing voltage differences between neighboring sensors (a tiny illustration of this is at the end of this entry). The author does a very good job of walking through the ins and outs of the montage, what certain kinds of readings mean, how to identify peaks of negativity (or positivity) in a bipolar montage, and the electrical gradient maps these montages produce.

    I wonder if a first step at trying to decipher emotional "footprints" is wearing an EEG device while watching a movie, video, or show? I find myself feeling a lot while watching things at home. And I think a good experiment would be to record the following data while watching a movie or video (the shorter and more "emotional", the better): 1) EEG data, 2) heart rate data, 3) video of facial expressions.

    I want to acknowledge that I've been all over the place since I started working on this "computer to measure my emotions" project. It's still my goal. But building that computer will require a lot of other experiments to get there. I've made a lot of incorrect assumptions and have tried to do things that did not make sense or were just wrong. Measuring my emotions in real time? Sounds great. But there are a lot of other things to do before I can do that. Build my own UI to show my emotions and EEG readings? Again, sounds great. But I don't have the expertise at the moment to have a well-informed opinion about the readings or EEG technology. It's clear what I should focus on: use existing technology to record and export data – brain activity data, heart rate data, etc – and then, with the exported data, learn about the mathematical/computational techniques to make sense of it, analyze it, and test it for certain "patterns" that may map to certain "states". A first prototype will not be a device I wear on a daily basis. No. That's WAY too hard to do as my first prototype. A first prototype is something more like a stationary, uncomfortable series of devices I strap on and mount privately at home while I watch a video, to see what kind of data I can capture (and label) during these video-watching sessions. This is a much simpler thing to assemble. And a simpler, repeatable, cheaper source of data I can use as a starting point.
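    Here's the tiny illustration of a bipolar montage I mentioned above. It's my own sketch, not from the book: each channel is just the difference between neighboring electrodes in a chain (e.g. the left parasagittal chain Fp1-F3-C3-P3-O1):

```typescript
// Each montage channel is the voltage difference between two neighboring electrodes.
type ElectrodeReadings = Record<string, number>; // microvolts per electrode

function bipolarMontage(readings: ElectrodeReadings, chain: string[]): Record<string, number> {
  const channels: Record<string, number> = {};
  for (let i = 0; i < chain.length - 1; i++) {
    const a = chain[i];
    const b = chain[i + 1];
    channels[`${a}-${b}`] = readings[a] - readings[b];
  }
  return channels;
}

// Example with made-up microvolt values:
console.log(bipolarMontage({ Fp1: 12, F3: 9, C3: 11, P3: 8, O1: 10 }, ["Fp1", "F3", "C3", "P3", "O1"]));
// { "Fp1-F3": 3, "F3-C3": -2, "C3-P3": 3, "P3-O1": -2 }
```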

  • Apr 6, 2025

    17

    I finished reading Chapter 3 in Practical Approach To Electroencephalography. Chapter 3 introduces common EEG terms. And while reading about epileptiform activity (which is a fancy way of describing peaks in an EEG graph) I learned something very important about how EEG is used to diagnose epilepsy. Epileptic events are rare while a patient's EEG is being taken. So how is epilepsy diagnosed? Through abnormal EEG reports. These abnormalities happen between seizures. And here is my favorite part: "It is these interictal 'footprints' that make the EEG such an effective tool in the diagnosis of seizures. Epileptiform activity is felt to represent increased cortical excitability or irritability." This is really important, because it helps me consider the idea of identifying footprints for other "abnormal" brain states or brain activity patterns. What are the 'footprints' of suicidal ideation? If there are 'footprints' or well-known brain activity patterns detected by EEGs that distinguish a patient's state of wakefulness, drowsiness, or sleep, then why can't there be more footprints to discover for other states of being? I also wonder why EEG technologists continue to specialize in reading EEG graphs (which are so hard to read) and why we don't devise simpler and more reliable forms of data visualization or pattern recognition to aid in the assessment of a person's brain health.

  • Apr 4, 2025

    18

    Last night I worked through a unit in Khan Academy to learn about composite functions and invertible functions. This morning, I moved on to the next unit: trigonometry. This is really important for me to understand because trigonometric functions are at the root of analyzing and processing signals. After going through the first lesson in trigonometry, the fundamentals of the sin, cos, and tan functions are very simple. They are functions that take an angle (in degrees or radians) as an input and output the ratio of two sides of the corresponding right triangle. This fits into signal processing because the values of sinusoidal waves oscillate between 1 and -1, which can be mapped to coordinates on the unit circle. I didn't write this clearly or eloquently. But that's okay. Just repeating what I learned this morning. And, here's a bit more repetition:

    cos π/3 = 1/2, sin π/3 = √3/2
    cos π/6 = √3/2, sin π/6 = 1/2
    cos π/4 = √2/2, sin π/4 = √2/2
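    And a tiny sanity check of those values in code (Math.cos and Math.sin take radians and return the unit-circle coordinates):

```typescript
// Print cos/sin for 60°, 45°, and 30°; they should match 1/2, √2/2, and √3/2.
const deg = (rad: number) => (rad * 180) / Math.PI;

for (const rad of [Math.PI / 3, Math.PI / 4, Math.PI / 6]) {
  console.log(`${deg(rad).toFixed(0)}°  cos=${Math.cos(rad).toFixed(4)}  sin=${Math.sin(rad).toFixed(4)}`);
}
// 60°  cos=0.5000  sin=0.8660
// 45°  cos=0.7071  sin=0.7071
// 30°  cos=0.8660  sin=0.5000
```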

  • Apr 3, 2025

    19

    Today I started reading Chapter 3 of Practical Approach to Electroencephalography. And while reading about the essential terminology and concepts underlying EEG (frequency, amplitude, location, electrodes...) I was surprised to see the 10-20 system's proponent introduced simply as "Jasper". Jasper? Who is Jasper? So I grabbed another book I have, Brainwaves by Cornelius Borck. I found Jasper. Jasper, as in Herbert Henry Jasper. He helped conceive the 10-20 System, which is a standard way of measuring and communicating brain activity. Jasper was a psychologist and electrophysiologist and had good relations and communication with the best doctors in America and Europe. He had the connections, reputation, and expertise to help develop EEG technology. The one quote that stood out the most was this one: "The closer the micro structures of those curves were analyzed, the more the link shifted from neurophysiological knowledge spaces." (Brainwaves, 174) The quote describes the sentiment in the medical field in the 1930s about the complexity of brain wave signals and the reliance on more advanced math and physics knowledge to uncover a deeper understanding of brain activity. Interestingly, I can smell that same sentiment in Mark's Practical Approach to Electroencephalography, which was published in 2025. Why? Why are doctors cautious of jumping outside their domain so easily? I think that sentiment is a mix of respect and fear. Respect for other fields and their complexities (for example, the respect for the math behind brain waves and Fourier transforms) and, simultaneously, the fear of delving too far away from their area of expertise. Why is that?

    The more I read about brain waves, the more curious I am about their mathematical and computational roots. And since brainwaves are signal-like events, they can be analyzed using Fourier transforms. A Fourier transform (usually computed with the fast Fourier transform, or FFT) takes the graph of a signal in the time domain and graphs it in the frequency domain. This transformation is beautiful. Because when you graph a signal in the frequency domain, the frequencies that compose the signal are revealed. Signals can be composed of many sub-signals. Signals can add to and subtract from each other. And the resulting shape of the signal can be decomposed into sub-signals with very specific frequencies. That is beautiful. And at the same time, it's also quite confusing. How is that possible? And why is that possible? (A small DFT sketch follows at the end of this entry.) The best video I've seen that shows how signals combine is Grant Sanderson's What is a Discrete Fourier Transform? posted on The Julia Programming Language channel. For a math-heavy explanation, I found Reducible's Discrete Fourier Transforms video helpful (but hard to follow since I don't have a strong grasp on fundamental math concepts like trigonometry and linear algebra). And for a historical explanation of Fourier transforms, Veritasium's The Most Important Algorithm of All Time is a great video. Which I'd call a mini documentary. Want to guess how FFTs were used? To detect which countries were conducting nuclear tests. These 3 videos have helped me get a better feel for the math of the FFT. But it's still not enough for me to deeply understand it. Which is why, as I continue to read about EEGs and the history of the technology, I'm re-immersing myself into trigonometry, complex numbers, and then eventually, the discrete Fourier transform and the fast Fourier transform. I'm working through Khan Academy's Precalculus course.

    And learning about composite functions. I'm very familiar with composite functions because building user interfaces with React is all about composing functions. In React, it's functions all the way down. And the one thing I appreciate from this very basic unit I'm working through about composing functions is how, sometimes, it's more efficient to skip nested functions if you have all the inputs needed to compute the last output in the chain of functions. For example: f(a) → b, g(b) → c, h(c) → d. If the end goal is to compute d, it might be worth thinking about how to create a new function, f'(a) → d.
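    Here's the small DFT sketch I mentioned above. It's the naive O(N²) version, written just to build intuition for how projecting a signal onto sinusoids reveals the frequencies inside it (an FFT computes the same thing much faster):

```typescript
// Naive discrete Fourier transform: magnitude of each frequency bin of N samples.
function dftMagnitudes(samples: number[]): number[] {
  const N = samples.length;
  const magnitudes: number[] = [];
  for (let k = 0; k < N; k++) {
    let re = 0;
    let im = 0;
    for (let n = 0; n < N; n++) {
      const angle = (-2 * Math.PI * k * n) / N;
      re += samples[n] * Math.cos(angle);
      im += samples[n] * Math.sin(angle);
    }
    magnitudes.push(Math.sqrt(re * re + im * im));
  }
  return magnitudes;
}

// Example: 64 samples of a signal made of a 5 Hz and a 12 Hz sine (64 Hz sample rate).
// The magnitude spectrum should peak at bins 5 and 12.
const samples = Array.from({ length: 64 }, (_, n) =>
  Math.sin((2 * Math.PI * 5 * n) / 64) + 0.5 * Math.sin((2 * Math.PI * 12 * n) / 64)
);
console.log(dftMagnitudes(samples).map((m) => m.toFixed(1)));
```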

  • Mar 31, 2025

    20

    Good morning. This morning I started reading Chapter 2 of Practical Approach to Electroencephalography. The chapter is called "Visual Analysis of the EEG: Wakefulness, Drowsiness, and Sleep". I'm slowly making my way through the chapter. I have to pause frequently to look up definitions for terms I'm not familiar with. Mainly, the terms used for the different brain regions. I will become more familiar with these terms, and after reading them over and over, I'm starting to get used to them. But I have to consciously pause and make the mental map of where the "posterior" part of the brain is, to then understand where the "posterior rhythm" is happening. Even though I learned the different sensor names used by the 10-20 System, I wasn't perfectly familiar with the general names of the brain regions as described by the system. So I Googled maps of the EEG brain regions, and now I'm familiar with the 6 brain regions, even though it's still slow for me to recall which region is where. The six regions are (Fp) Frontopolar, (F) Frontal, (T) Temporal, (P) Parietal, (C) Central, and (O) Occipital. As I continue to read the chapter, I find it fascinating that there are patterns of brain activity that can be reliably associated with what a person is doing (or how they're "feeling"). For example, there are very particular brain activity patterns that show when a person is awake with their eyes closed. There are also brain activity patterns for drowsy states, and for different sleep stages. How many more brain activity patterns are yet to be discovered??? The possibilities excite me.

  • Mar 28, 2025

    21

    From Jonathan Sadowski's The Empire of Depression, on how to solve the problem of diagnosing depression:

    > One possible solution is to use local categories, or "idioms of distress," that are used by particular cultures, instead of supposedly universal diagnoses (Chapter 1, page 17).

    I don't think this will work. If diagnoses are not rooted in descriptions of biological events, we are just playing with words. We are just relying on the language of the time to describe how people feel. That is the root of the problem: depression is still not understood well enough, and because we don't understand it well we don't know what to look for in a patient's brain/body to identify its presence. I'd argue that we don't need to understand WHY depression happens. The first step is making educated guesses about what biological state maps to what we label as "depression". From there, we can iterate on that definition. Refine our mapping algorithms. And slowly mold a definition of depression and not-depression. But we need biology as an anchor.

    I had a question while running:

    > At what point do babies develop a sense of self? And what about their brains changed? What regions of the brain changed that allowed the sense of self to emerge / be possible? Why doesn't a sense of self develop in other organisms? Do other organisms have a sense of self? If not, do these organisms have different brain systems than us?

  • Mar 27, 2025

    22

    Good morning. This morning I saved my notes, quotes, and new terms from the first chapter of Practical Approach to Electroencephalography by Mark H. Libenson. I use a custom app that works as a private source-base. I define a "source-base" as a collection of quotes from a variety of source types (books, podcasts, articles, etc). For each source I can save the text, the page / location I found it in, and leave comments. As I was reading the first chapter, though, I thought it would be very helpful to add an optional date field to each citation (a small sketch of this is at the end of this entry). This date field would represent the date of the event described in the citation. For example, let's say the citation is "Hans Berger recorded the first EEG of man in 1929." I would set "1929" as the date. This allows me to display my citations in a timeline. Only citations with a date will be displayed. This was really fun to add because now I have a "Timeline" tab that groups all citations from all sources in a single timeline. It will allow me to build a historical timeline of any events I'm interested in.

    I also listened to an episode of YC's "How to Build the Future" podcast, where they interview Tony Xu, the co-founder and CEO of DoorDash. I enjoyed learning about the early days of DoorDash, how he started the company with some college friends, and how his experience delivering food for years gave him an invaluable perspective on and appreciation for the people using DoorDash. What I also really liked about the podcast was how he explained his reason for joining YC. His reason? YC was an accountability system. If they joined YC they would be forced to move fast and build something.
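    In code, the date-field idea is something like this (field names are just how I think about it, not the app's actual schema):

```typescript
// A citation may carry the date of the event it describes; the Timeline tab
// keeps only dated citations and sorts them.
interface Citation {
  sourceTitle: string;
  text: string;
  location?: string;   // page or location in the source
  comment?: string;
  eventDate?: string;  // e.g. "1929" — the date of the event described, not the reading date
}

function buildTimeline(citations: Citation[]): Citation[] {
  return citations
    .filter((c) => c.eventDate !== undefined)
    .sort((a, b) => a.eventDate!.localeCompare(b.eventDate!));
}
```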

  • Mar 26, 2025

    23

    I read the first chapter of Mark H. Libenson's Practical Approach to Electroencephalography. The first chapter is a brief introduction to electroencephalography (EEG). Here are a few fun facts I learned: There are very advanced imaging techniques (CT scans, MRIs, fMRIs, nuclear medicine imaging), and EEGs still have a very important use in assessing neurological disorders like seizures and comas. While advanced imaging techniques are used to create snapshots of a patient's internal anatomy, EEG tests record electrical brain patterns over time. These patterns help doctors assess a patient's electrical irritability in the brain and whether the patient is awake or asleep. EEGs are used by doctors to evaluate seizures and coma states, and to distinguish psychiatric illness from organic disease. In the 1700s, Luigi Galvani was the first person to suggest that animal bodies are a source of electricity. He discovered this when he saw the leg of a dead frog twitch when one of the frog's nerves was shocked accidentally. In the 1800s, Richard Caton was the first person to observe electrical activity from the brains of monkeys and rabbits. He used a Thomson galvanometer, which projected (not recorded) voltage differences. In the 1900s, Hans Berger was the first person to record an EEG in humans. He first tried to record EEGs by using the tools used to conduct electrocardiograms on patients with skull defects or lesions. The first person to ever have their brain activity recorded was Klaus Berger, Hans' son, in 1920. The last section of the book summarizes each chapter and explains the importance of the topics covered. Chapter 4 covers EEG localization techniques, which help EEG technologists translate a group of waves into a 3D map of charge and understand how these 3D maps change over time. This section says that one of the main points of interest in understanding EEGs is learning new insights about the brain from deciphering new patterns in its electrical activity. Some other topics I'm excited to learn about include EEG montage strategies and electrode placements, artifact recognition, EEG reports, and the biochemistry of EEG waves.

  • Mar 25, 2025

    24

    Good morning. I want to start answering the 2 questions I had yesterday: Why is brain activity measured as the difference between two electrodes? Here's a summary of Claude's response: There are several reasons why brain activity is measured as the potential difference between electrodes rather than as the absolute electrical output of specific brain regions. Brain waves are very weak (measured in microvolts), and there may be external sources of electricity that could affect the reading. Measuring the difference between the readings of two electrodes cancels out environmental noise affecting the individual readings. Another reason is that there is no absolute "zero" electrical potential in the body to measure against. We need a base reading to anchor the measurements. While it's become common practice in EEG reading to place reference sensors on the patient's earlobes (because earlobes are mostly electrically inactive, easily accessible, and a consistent reference location across patients), there are other ways to calculate an electrical reference. I also learned that the electrical activity measured by an EEG electrode is a complex mixture of activity from various brain regions, not a measurement of the brain region directly beneath it. Why? Because the conductive properties of brain tissue, cerebrospinal fluid, skull, and scalp cause electrical signals to spread diffusely. And by the time the signals reach the scalp electrodes, signals from different regions have blended together. And the last reason is that the skull is a poor electrical conductor. The skull dampens and blurs the electrical signals generated by neurons before they reach the scalp. Knowing this about the physical limitations of EEG makes me appreciate the technology more, but it also makes me wonder if there are new ways of measuring brain activity that overcome the limitations of EEG.

    This morning I finished sketching the general flow of information and the data models to record my own sessions using the OpenBCI Ganglion as my device. After reading more about how brain activity is calculated, I realize I need a baseline to test my readings against. And OpenBCI's GUI is a great starting point. They even have their own focus widget. I remember watching this YouTube video and will use it as a guide to set up my OpenBCI GUI + Ganglion sensors to measure my focus levels and test the quality of my connection. Then, once I've validated that, I can run the focus tests with the GUI, record the sessions, and use those as a reference to assess the quality of my computations and recordings using my web app. I will record some session data tomorrow using the OpenBCI GUI.

    I bought Mark H. Libenson's Practical Approach to Electroencephalography. It's a book written by a neurologist for future neurologists. From what I read online, it's a great introductory book to EEG. I think this book will serve as a foundation for understanding the use and importance of EEG in practical clinical settings. And it will also help me think of ideas and see if my intended uses of EEG make sense. I decided to buy this book yesterday after realizing that I had made a very uninformed assumption about how brain activity is measured by EEGs. I need to understand the fundamentals of EEG technology to see how I can use it in innovative ways. But I first need the basics. And I think this book will provide the basics as I tinker with the OpenBCI Ganglion and get a more hands-on learning experience.

  • Mar 24, 2025

    25

    Good morning. I had a chance to sketch out the user experience for my brain activity app. I have a clearer idea of what I want the web app to do. The general idea is to build a web app that allows me to connect an EEG device (in my case, an OpenBCI Ganglion) and record brain activity sessions. These sessions will be labeled and grouped by experiment. Each experiment will have a title and description, an EEG device and EEG sensor setup, and recorded sessions (a rough sketch of this model is at the end of this entry). This will allow me to get a feel for designing data models and UI for my experiments. This is completely new to me. Two fun UI components I'll have to design will be an interactive EEG Sensor Setup component and a data-streaming session recorder. The EEG Sensor Setup component will allow me to visually select which sensor placements (using the 10-20 System naming convention) are required for the experiment by selecting blocks on a top view of a head. The session recorder component will be a UI that makes it easy to stream data – in an open-ended way with a start/stop button, or by using pre-defined timed sessions (like Apple's Timer app on iOS).

    My first experiment will be to use the OpenBCI Ganglion to stream brain activity data from the FP1 and FP2 sensors to measure my focus levels. I will record three 1min sessions of brain activity. For the 1st and last sessions, I will try to calm my mind and stay focused. And for the 2nd session I will focus on making mental calculations to solve an algebra problem. I will then see how my recorded brain activity changes between each session. And over time, see if there are recognizable brain activity patterns when I'm calm versus when I'm focused on solving a math problem.

    As I was sketching the data requirements for the app described above, I opened Netter's Atlas of Neuroscience and Ken Ashwell's The Brain Book. I was using these books to find all the different sensor position names used by the 10-20 System. I was finding a lot of different images online and was curious to see if these books showed a picture with an "official" sensor map. I didn't find the sensor map I was looking for. But I did find a few interesting quotes. From Netter's Atlas of Neuroscience: "EEG permits the recording of the collective electrical activity of the cerebral cortex as a summation of activity measures as a difference between two recording electrodes" (Chapter 1: Neurons and their properties, page 35). This helped me realize that I don't actually understand how brain activity is captured. I need to read more. Specifically, about the "difference between two recording electrodes" piece. Why the difference? Why not just the raw electrical output? Then, from The Brain Book, I found this quote: "An important difference between spoken language and mathematical reasoning is that spoken language may be carried out without a fully conscious awareness of the details of sounds, whereas mathematics demands full conscious awareness of the symbols being used... the process of mathematical reasoning is always a conscious one and usually dominates our cognitive function while it is being performed" (Chapter 6: The Social and Thinking Brain, page 202). Great. It seems like solving a math problem is a good choice for measuring a difference in focus levels in my experiment. I have to run to work.
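    The rough model sketch I mentioned above looks something like this (names are placeholders, not final):

```typescript
// An experiment groups recorded sessions and pins down the device and the
// 10-20 sensor placements it requires.
type SensorPlacement = "Fp1" | "Fp2" | "F3" | "F4" | "C3" | "C4" | "O1" | "O2"; // subset for illustration

interface Experiment {
  title: string;
  description: string;
  device: "OpenBCI Ganglion";          // the EEG device used
  sensorSetup: SensorPlacement[];      // e.g. ["Fp1", "Fp2"]
  sessions: RecordingSession[];
}

interface RecordingSession {
  label: string;                       // e.g. "calm #1", "mental math"
  startedAt: Date;
  durationSec: number;                 // e.g. 60 for the 1min focus sessions
  samples: number[];                   // streamed microvolt values
}
```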

  • Mar 22, 2025

    26

    Today I finished reading Jonathan Sadowski's The Empire of Depression. It's such a great book. It provides good historical context for the challenges we face in defining, diagnosing, and treating mental health disorders. Specifically depression. There is so much work to be done in this field. So much work to do to align on what these mental disorders are, how to define them, and how to start measuring progress to help people stop suffering from these painful mental states. After reading this book I drafted a brief hypothesis of my vision to quantify brain activity using a 2D brain activity map. The changes in electrical activity in the map over time could be used to identify patterns, and from those patterns, extract emotional data. This is a hypothesis. And very much a fantasy right now. But I hope to continue learning about EEG technology and the physics and math required to decipher these patterns, and to build a device I can wear in the future that will log my emotional activity throughout the day.

    I continued working on the web app I'm using to connect my OpenBCI Ganglion via Bluetooth, measure electrical activity in microvolts, and display those measurements. I designed and built the UI several weekends ago while in Texas, and today I picked back up where I left off. Back then I built a web app that simulated connecting to a device via Bluetooth and then displayed a microvolt value. Today I built the logic to actually connect to my Ganglion device via Bluetooth using the Web Bluetooth API and decompress the data stream into microvolt values using the OpenBCI documentation. I was able to connect the UI, but the values aren't right yet. How do I know? Because the moment I disconnected the sensor attached to my forehead, the data stream continued to show values above 0. I was expecting 0 volts to be coming from that sensor... So I'll continue to work on this as soon as I get a chance this weekend.

  • Mar 20, 2025

    27

    Good morning. As I read Jonathan Sadowski's The Empire of Depression I had a thought: Consciousness is commonly labelled as an "emergent property" of the mind. Right? But... What is the function of consciousness? What are the functions of emotions? What are the high-level input-outputs of the systems responsible for 'creating' (outputting) consciousness? And in turn, what other systems is consciousness a part of? Is consciousness the output of a function? The input of a function? A type of information? A type of function? A variable? A constant? A type of data storage mechanism? A protocol for transferring information? A state manager? (These are all software engineering terms. Terms I work with on a daily basis.) From a systems perspective, I'm curious to know what biological processes we're referring to when we say "consciousness". And I'm also curious to know what those biological processes actually are. And understand the dependence of these processes with other processes.

  • Mar 19, 2025

    28

    Continued reading Jonathan Sadowski's The Empire of Depression. A few topics I enjoyed reading about were the evolution of new therapeutic modalities from psychoanalysis in the 1970s and the factors that influenced their evolution. And reading about the creation of Cognitive Behavioral Therapy by Aaron Beck led me to read back through Martin Seligman's studies on "learned helplessness" as described by Robert Sapolsky in his book Why Zebras Don't Get Ulcers. My next step, as soon as I finish reading Sadowski's book (and also, maybe, Sapolsky's handful of chapters related to depression), will be to build a wearable brain activity monitor with a single sensor that I can use every day in an unobtrusive way.

    Also. I was admiring the "medical device" that eyeglasses are. But then, as I put my contact lenses on, I realized how creative contact lenses are. They are truly invisible. What if future brain activity monitors are like "contact lenses" and function as invisible caps that wrap around our scalps just as contact lenses wrap around the iris and pupil? This brain activity monitor, or "emotional lens", could allow us to see inside ourselves. To see our "souls", or our emotional activity. The shapes of the glasses constructed by our minds to navigate the world.

    Here's an idea I had during my morning run. Summary: A Continuous 2D Brain Activity Map. What if the technology used to better understand brain activity (and therefore emotions) is a technology that creates a continuous 2D map of brain activity over time? Just like the Earth (a sphere) has a 2D representation of its surface, so could our heads / scalps have a 2D representation of the electrical impulses emitted over time. This 2D Brain Activity Map could be used to identify the key signatures of emotions and feelings. How does fear manifest in my mind? Could there be signature brain activities within predictable time intervals that define a specific feeling or emotion? If yes, how do these signature electrical patterns compare across individuals in a family? In a community? In a city? In a country? Globally as a species? How can we define parameters for "healthy" and "unhealthy" patterns of electrical activity? What electrical activities tend to initiate or lead to behaviors? Which of these behaviors are destructive to the self? Destructive to society? More specifically, what is the mapping between key electrical signatures to emotions to behaviors to suicide? To murder? As a species, how can we agree to define an initial open-source definition of these health parameters? How should such parameters be defined? Who oversees such parameters? What professions and fields should be involved? Should religious professionals also be involved? After all, I believe that if we are able to define such electrical activity signatures that represent certain human feelings, and we are then able to set "healthy" and "unhealthy" parameters, in a way, we would be defining moral parameters rooted in biology. This could be one of the bridges between science and religion. It could also be a way to start mapping what some religions define as "the soul". I don't think there is a soul. I think it's just our human way of describing what it feels like to feel things and to be something. But nonetheless, this might bring an opportunity to bring more scientific rigor to therapy, more understanding to human suffering, and more dialog between science and religion. These two fields, science and religion, might agree to disagree. But at least let's agree on a few things and on the important perspective each field brings forth, irrespective of whether there is a God or not. Let's focus on the morals.

  • Mar 16, 2025

    29

    I've been reading Jonathan Sadowski's The Empire of Depression. It's a book about the history of depression. Reading this book has helped me think about the kinds of practical applications for wearable, non-invasive brain activity monitors. The historical context is helpful for seeing how ambiguous tracking and defining "depression" still is. Why is it hard? Because we still don't know what it is. One debate mentioned by Sadowski is where to draw the "illness" line for depression. When is a person's mood or reaction to an event normal and expected, and when is it not normal and therefore an illness? My take is that we should draw a line at the extremes, and over time, with research and better understanding of the brain, mark new lines towards the more common experiences that we currently label as "depression". I think we should mark the line whenever someone is considering taking away their life or the life of someone else. Suicide and homicide are, in my opinion, reflections of brain activity that are worth labelling as states needing support and attention. How can we create brain activity monitors that make it easy for people to see their emotional state? How can brain activity monitors help in the prevention of suicide and the diagnosis and treatment of suicidal ideation? What mental states do people contemplating suicide exhibit? What is the brain activity map of suicidal ideation? What does it look like? What set of emotions and emotional intensity marks a person contemplating death? How would people contemplating suicide even consider wearing a brain activity monitor? How could we help those people who are silently dealing with suicidal ideation?

  • Mar 6, 2025

    30

    I'm still in Texas with my family. I haven't had time to work on the brain activity prototype. But a few things on my mind are how companies with research departments are set up. How do they manage the timeline of innovations and launching new products while also making money? Another thing I've been curious about is the history of the heart rate monitor. Who invented it? How has it developed over the years? What "form" did it have initially (was it always a wearable? Was it just a stethoscope?)? I think the history of the heart rate monitor (and the background of the individuals and teams behind its invention) will be very informative (and inspiring) for developing brain activity monitors.

  • Mar 4, 2025

    31

    Over the weekend I built a UI to show a focus-level reading. I built a UI that makes it easy for anyone to connect their Ganglion to a web app and measure their focus level. I haven't had much opportunity to develop the UI further. I'm out of town for a family wedding. But I'm constantly thinking about this project. The one thing that's been on my mind is a brain activity map. Instead of using discrete sensors to measure brain activity, why not design a continuous sheet (or map) that allows us to see the different regions of brain activity? Like a Cartesian map. We could define coordinates or some form of system. I wonder what the most informative, yet readable, format will be for brain activity. Sensors with their FP1 and FP2 labelling are inhuman. And also, so tedious to set up! How do our brains balance emotions? How do they regulate anger, for example? What would happen to a person if they did not have the ability to regulate an emotion like anger? Are there emotional regulation disorders?

  • Feb 27, 2025

    32

    I read about Bluetooth. What it is. How it works. And I learned that it transmits information by encoding data into radio waves (in the 2.4GHz ISM band) and emitting them through the air, where they are picked up by other devices that can receive and decode that information. One other cool thing I learned about Bluetooth is what is called "frequency hopping". Frequency hopping solves the problem of different Bluetooth device signals interfering with each other. To solve this, Bluetooth switches between different radio wave channels (or frequencies) 1600x per second! I also read a little bit about its security. How easy is it to spy on or intercept Bluetooth data? From what I read there are security concerns (like sending data to or extracting data from unsolicited devices, controlling devices without pairing, or hijacking a device's encryption keys). One benefit of Bluetooth's 10-100 meter connection range is that, in general, attackers need to be nearby to hijack a Bluetooth device. In general, Bluetooth technology is safe to use for most common uses (headphones, keyboards, and mice). Risks to be aware of include using old, outdated devices and pairing with unknown devices. I'm learning about Bluetooth because I'm curious about the technology, and more importantly, I'm curious about how safe it is and how well it can keep a user's brain activity private.

  • Feb 26, 2025

    33

    My Waveform Generator arrived! Today I only have 45mins to make progress. And I'm going to draft a plan for Nami: A Brain Activity Monitor. It's okay if the draft and final output is not great. I need a starting point to iterate on. My first goal? Display a focus-level reading in real-time. More specifically:

    1. Show a focus-level reading from 0 - 100%
    2. Connect to and stream data from an OpenBCI Ganglion
    3. Instruct users on how to connect to their OpenBCI Ganglion

    What about connection quality? What about reading-level accuracy? Documentation? Replication? Good questions. I'll tackle those later. For now, I want to focus (haha) on building out the UI and logic to achieve this goal. My tools will be:

    - OpenBCI Biosensing Starter Bundle
    - FP1, FP2 sensors and EEG ear clips
    - NextJS app using the Web Bluetooth API
    - Cursor & Claude Sonnet 3.5-3.7

    I found this great video from 2019 by retiutut on YouTube demonstrating exactly what I want to build. retiutut connected an Arduino to his OpenBCI and programmed a fan to turn on when he's in a state of focus. This is exactly what I want to achieve. All the dropdowns and configurations and "smoothing" of the data stream in the OpenBCI UI is complicated and intimidating. It might be perfect for neuroscientists, but not for non-scientists like me! I want to build the heart rate monitor for brain activity. For example, the Apple Watch heart rate monitor is beautifully simple and easy to read and understand. I'd love to build something as simple as the Apple Watch heart rate monitor, but for emotions. I noticed retiutut's Ganglion has a white case and hangs from his neck. That's a great idea. I often find myself weaving my arms through the cables and the Ganglion flying around as I try not to move. Good initial idea! But for the future? Bluetooth sensors PLEASE.

    Also... why is the 10-20 system made up of discrete electrode placements? Why can't we create a device that measures the continuous output across the scalp surface? That way, we have a "brain map" and can track changes in electrical activity across the map rather than discrete chunks of the scalp area. Would I have to be perfectly shaven to get good quality signals from such a scalp-area map sensor? Does hair distort or reduce the quality of the electrical signals emitted by the brain? How far do these electrical signals travel?

    I also read up a little bit more about the electrical frequency ranges emitted by the brain. It seems like the frequencies range from 0.5 - 100 Hz. A set of frequency ranges have been labeled and associated with specific kinds of brain activity. Here's what I read:

    - Delta (0.5 - 4 Hz): Deep sleep and unconsciousness
    - Theta (4 - 8 Hz): Drowsiness, meditation, light sleep, creativity, and deep relaxation
    - Alpha (8 - 13 Hz): Relaxed, calm wakefulness (eyes closed)
    - Beta (13 - 30 Hz): Focused, active thinking, problem-solving
    - Gamma (30 - 100+ Hz): Peak concentration and higher cognitive functioning

    How do these frequencies vary across scalp areas? Which combination / duration of these frequencies (or other frequencies!) can be mapped to specific emotional states? Why is the Gamma range so wide? What are "higher cognitive functions"??? (There's a rough sketch of a focus computation from these bands at the end of this entry.)

    I charged my Apple Watch just to see the Heart Monitor app. It's beautiful.

    - The main screen shows: "Current, 64BPM, 69BPM 1m ago" + a 3D animation of a beating heart
    - The 2nd screen shows a daily range: "Range, 62-82BPM, Today" + a graph of the daily heart rate range
    - The 3rd screen shows the resting daily heart rate: "Resting Rate, --BPM, Today" + a graph of the daily resting heart rate
    - The 4th screen shows the walking average heart rate: "Walking Average, --BPM, Today" + a graph of the daily walking average heart rate

    I love these screens. No computations. No configurations. Just opinionated, informative screens that give an overall picture of heart activity. Nice! Each screen in the Apple Watch Heart Monitor app has an "i" button that shows an informative overlay when tapped. For example: "About Walking Heart Rate Average: Your walking heart rate is the average heart beats per minute measured by your Apple Watch during walks at a steady pace throughout the day. Like resting heart rate, a lower walking heart rate may indicate better heart health and cardiovascular fitness. Walking regularly has many health benefits, and you may see your walking heart rate lower over time by staying active, managing your weight, and reducing everyday stress."
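    To make the focus goal a bit more concrete, here's a rough sketch of how a 0 - 100% reading could be computed from the band powers above. The beta / (alpha + theta) ratio is just one common heuristic I might try (not OpenBCI's method), the 200 Hz sample rate is an assumption based on the Ganglion's default, and the squashing into 0 - 100 is arbitrary:

    // Rough sketch (assumptions, not a validated measure): estimate theta, alpha,
    // and beta band power from one window of EEG samples with a naive DFT, then
    // squash a beta / (alpha + theta) ratio into a 0 - 100 "focus" reading.
    type Band = { lowHz: number; highHz: number };

    const BANDS: Record<'theta' | 'alpha' | 'beta', Band> = {
      theta: { lowHz: 4, highHz: 8 },
      alpha: { lowHz: 8, highHz: 13 },
      beta: { lowHz: 13, highHz: 30 },
    };

    // Power in a frequency band, computed with a naive DFT over the window.
    function bandPower(samples: number[], sampleRateHz: number, band: Band): number {
      const n = samples.length;
      let power = 0;
      for (let k = 1; k < n / 2; k++) {
        const freqHz = (k * sampleRateHz) / n;
        if (freqHz < band.lowHz || freqHz >= band.highHz) continue;
        let re = 0;
        let im = 0;
        for (let t = 0; t < n; t++) {
          const angle = (2 * Math.PI * k * t) / n;
          re += samples[t] * Math.cos(angle);
          im -= samples[t] * Math.sin(angle);
        }
        power += (re * re + im * im) / (n * n);
      }
      return power;
    }

    // Map one window of samples to a 0 - 100 focus reading.
    export function focusScore(samples: number[], sampleRateHz = 200): number {
      const theta = bandPower(samples, sampleRateHz, BANDS.theta);
      const alpha = bandPower(samples, sampleRateHz, BANDS.alpha);
      const beta = bandPower(samples, sampleRateHz, BANDS.beta);
      const ratio = beta / (alpha + theta + 1e-9); // avoid dividing by zero
      return Math.round(100 * (1 - Math.exp(-ratio))); // arbitrary squashing into 0 - 100
    }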

  • Feb 25, 2025

    34

    My Waveform Generator arrives today. I bought a more affordable generator, the Koolertron 15MHz from Amazon. I'll use the Koolertron to test the accuracy of the EEG readings. This morning I tried to integrate brainflow so I could use a real EEG data processing library. But since it can't run in the browser (it needs to run on a server), I decided to start a new project. I had built the Ganglion prototype in my playground NextJS app (the one I use to publish this website). I'd rather build and open-source a new app that is just about brain activity monitoring. I'm thinking of calling the project "Nami" (wave in Japanese): "Nami: A brain activity monitor". I created Nami. I'll publish it to GitHub tomorrow and start building more functionality into it. I also need to pause and define what the purpose of Nami is and how that translates to the experience of downloading and using Nami.

  • Feb 24, 2025

    35

    Wow. I have my NextJS app streaming voltage measurements from my forehead. This is so cool. I'm going to start a new project and open-source it. I'll build it out in the open, improve the experience of finding, pairing, and streaming data from the OpenBCI Ganglion, and also work on the UI. My UI is terrible. Graphs are always confusing. I want a single value that shows what I care about! Kind of like a heart rate monitor! It does all the heavy lifting for you and shows you the average bpm. What would be a useful "bpm" for brain activity from a particular area of the brain? In this case, from my forehead? I need to do a bit more digging on how I can process these electrical signals and measure "attention" or "focus" and "calmness". That's what Tero and Kimmo Karvinen (who wrote Make A Mind-Controlled Arduino Robot) did to make an Arduino car go and stop using EEG data. Instead of making a car go and stop, I want to see my VPS (volts per second?) oscillate.

    I also wonder if I could test the quality of these signals. How? What if I bought a device that emits a known voltage? What if I have a separate device that can measure the emitted voltage, and then I compare that device's reading with my application's reading? That way I can test the quality of the streamed data! Let me look into that. I learned about waveform generators. Waveform generators allow you to emit controlled electric currents. Check out Siglent Technologies' Amazon shop. They sell 3 options that reliably emit electrical currents that could mock brain currents. This will help me test the quality of the algorithms used to display brain activity in my UI! I also learned about head phantoms. Head phantoms are anatomically accurate plastic human heads that emit realistic electrical currents. Check out this PubMed article for more details. I don't necessarily want to build a "head phantom". But for sure something like it! Something I can quickly set up and use to test my algorithms.
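    Here's a rough sketch of what that quality check could look like, assuming the waveform generator is set to a clean sine wave with a known frequency and amplitude (the function names and the 10% tolerance are made up for illustration):

    // Sketch: sanity-check recorded samples against a known test signal from a
    // waveform generator. Assumes a clean sine wave; names and thresholds are illustrative.
    interface TestSignal {
      frequencyHz: number; // what the generator is set to emit
      amplitudeUv: number; // expected peak amplitude, in microvolts
    }

    // Estimate the dominant frequency from zero crossings (fine for a clean sine).
    function estimateFrequencyHz(samples: number[], sampleRateHz: number): number {
      const mean = samples.reduce((sum, s) => sum + s, 0) / samples.length;
      const centered = samples.map((s) => s - mean); // remove any DC offset first
      let crossings = 0;
      for (let i = 1; i < centered.length; i++) {
        if ((centered[i - 1] < 0) !== (centered[i] < 0)) crossings++;
      }
      const seconds = centered.length / sampleRateHz;
      return crossings / 2 / seconds; // a sine crosses zero twice per cycle
    }

    // Estimate the peak amplitude from the RMS (peak = RMS * sqrt(2) for a sine).
    function estimateAmplitudeUv(samples: number[]): number {
      const mean = samples.reduce((sum, s) => sum + s, 0) / samples.length;
      const rms = Math.sqrt(
        samples.reduce((sum, s) => sum + (s - mean) * (s - mean), 0) / samples.length
      );
      return rms * Math.SQRT2;
    }

    // Compare the app's reading against the generator's known settings.
    export function signalLooksRight(
      samples: number[],
      sampleRateHz: number,
      expected: TestSignal,
      tolerance = 0.1 // allow 10% error
    ): boolean {
      const freqError =
        Math.abs(estimateFrequencyHz(samples, sampleRateHz) - expected.frequencyHz) /
        expected.frequencyHz;
      const ampError =
        Math.abs(estimateAmplitudeUv(samples) - expected.amplitudeUv) / expected.amplitudeUv;
      return freqError <= tolerance && ampError <= tolerance;
    }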

  • Feb 23, 2025

    36

    I've spent the last few hours trying to stream data from my OpenBCI Ganglion to my Mac without using the OpenBCI GUI. And it's been very challenging. I've tried using Python. I've tried using the Web Bluetooth API. I've tried using a basic NextJS setup. I have issues finding my Ganglion. And then if I eventually do find it with the Web Bluetooth finder, I can't connect to it or stream data. I will consolidate my learnings into a repo and open-source it, so that others can easily connect to their Ganglions and stream data. Making progress. I was not able to connect to the Ganglion via the Web Bluetooth API at first because it uses native Bluetooth, while the USB dongle is a BLED112. What is BLED112? From Claude: BLED112 is a USB Bluetooth Low Energy (BLE) dongle created by Silicon Labs (formerly Bluegiga). It's essentially an external adapter that adds Bluetooth Low Energy capabilities to computers and other devices that don't have built-in BLE support.

    5 hours later and I'm streaming EEG data in micro-volts on my NextJS app using TypeScript!! It took me 5 hours to:

    1. See my device in my Bluetooth list using the Web Bluetooth API
    2. Connect to my Ganglion
    3. Stream brain activity data

    I'm going to open-source this. What I built should make it super easy to connect a Ganglion to a Mac using TypeScript. I don't understand the inner workings of how the data stream is converted to micro-volts. But I used OpenBCI's Ganglion data transfer doc as a reference for the parsing. I'll also work on a UI that makes reading EEG data less intimidating and much more user friendly. Current idea? A speedometer-like UI. It took me 4 hours to achieve step 1. I failed to see my Ganglion device in the available Bluetooth devices for way too long. Why? Because the Ganglion has a display name of "Simblee". I randomly discovered that while reading through this OpenBCI Ganglion doc. Okay buenas noches!
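    For anyone curious what the happy path looks like, here's a minimal sketch of the Web Bluetooth flow I ended up with. The service and characteristic UUIDs below are placeholders; the real values (and the packet-to-microvolts parsing, which I'm omitting) come from OpenBCI's Ganglion data transfer doc:

    // Minimal sketch of connecting to a Ganglion with the Web Bluetooth API.
    // UUIDs are placeholders; take the real ones from OpenBCI's Ganglion docs.
    // (Browser Web Bluetooth types come from the @types/web-bluetooth package.)
    const GANGLION_SERVICE: BluetoothServiceUUID = 0xfe84; // placeholder: check OpenBCI's docs
    const RECEIVE_CHARACTERISTIC: BluetoothCharacteristicUUID =
      '00000000-0000-0000-0000-000000000000'; // placeholder receive characteristic UUID

    export async function connectToGanglion(onPacket: (packet: DataView) => void) {
      // The Ganglion advertises as "Simblee", not "Ganglion" (this cost me 4 hours).
      const device = await navigator.bluetooth.requestDevice({
        filters: [{ namePrefix: 'Simblee' }],
        optionalServices: [GANGLION_SERVICE],
      });

      const server = await device.gatt!.connect();
      const service = await server.getPrimaryService(GANGLION_SERVICE);
      const receive = await service.getCharacteristic(RECEIVE_CHARACTERISTIC);

      receive.addEventListener('characteristicvaluechanged', (event) => {
        const characteristic = event.target as BluetoothRemoteGATTCharacteristic;
        if (characteristic.value) onPacket(characteristic.value); // raw packet; still needs parsing to microvolts
      });

      await receive.startNotifications();
      return device;
    }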

  • Feb 22, 2025

    37

    I'm going to build an app that reads data streamed over a Bluetooth connection from my OpenBCI Ganglion. Will start processing data from a single sensor. Will learn about how to measure the quality of the data stream and how to measure microvolts. I was very unsuccessful. I was not able to stream data from a single node to my Mac. I wasn't able to connect my Ganglion to my Mac. I know I can connect it using the OpenBCI app. I was able to do that earlier this week. But I wanted to try and set up a TypeScript project to stream my data since I'm most comfortable with TypeScript. I wasn't even able to get the dependencies installed to connect to my Mac's Bluetooth port. So I tried with a Python app and OpenBCI's Python library. Didn't work either. Tomorrow I'll give it another try. Reminds me of the days when I started learning web development. It would take me 6 hours to build a simple navigation menu. Okay. I connected! Claude wrote a script to detect Bluetooth devices around me and finally found the Ganglion!
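    The Node-side scan looked roughly like this (a minimal sketch assuming the @abandonware/noble package, not the exact script Claude produced):

    // Sketch: list nearby BLE peripherals from Node, assuming @abandonware/noble.
    import noble from '@abandonware/noble';

    noble.on('stateChange', (state: string) => {
      if (state === 'poweredOn') {
        noble.startScanning([], true); // no service filter, allow duplicate advertisements
      } else {
        noble.stopScanning();
      }
    });

    noble.on('discover', (peripheral: any) => {
      const name = peripheral.advertisement.localName ?? '(no name)';
      console.log(`${name}  id=${peripheral.id}  rssi=${peripheral.rssi}`);
    });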

  • Feb 17, 2025

    38

    Today I played with the OpenBCI biosensor for the first time. I set up my Ganglion board with all the sensors. I charged the battery for like 2 hours? And I did my best to wear the sensors to capture my EEG. I did see some EEG data and recorded my first test! The EEG sensor "quality" was poor (orange/yellow/red). I wasn't able to get the lights to turn green. I'll continue looking into it tomorrow. I do have to say: wearing an EEG in the current state of the technology is TERRIBLE. The OpenBCI software is very confusing and hard to use. Connecting the pins to the Ganglion is straightforward, but I only learned how by watching a YouTube video. What I found most frustrating was all the cables. How can I "simulate" a real-world environment and feel things I would under normal circumstances with 4 cables hanging over my head?? This technology is very promising and super exciting. And I look forward to getting a basic project set up and then, little by little, tinkering with and improving this technology. No wires please! And a nice non-medical-grade user interface!

  • Feb 16, 2025

    39

    I just finished reading and taking notes from Make a Mind-Controlled Arduino Robot. I only read the useful parts (the Preface and a sub-section of Chapter 2: Coding). The most important thing I learned about was the OpenEEG project. It's a collection of plans and instructions for people who are getting into EEG and want to build their own. In my case, I already have an EEG (OpenBCI's Biosensing Starter Bundle). But what is very cool about the OpenEEG website is that it has all the information anyone needs to build their own EEG and use the available, public software. I think this is where I'll find the algorithms I'll need to start processing the data streams captured with my biosensor.

  • Feb 15, 2025

    40

    My OpenBCI biosensor is here! To start learning about biosensors, I'm going to build a simpler computer: a computer that makes the following transformation: brain activity > { Focus: % }. There's probably an algorithm that already does this, so training a model is not necessary. But what's good about having an algorithm is that it gives me a programmatic way of checking correctness, which is perfect, because the output of that algorithm is exactly what I'll need later to test a model's accuracy. A few years ago I randomly bought Make a Mind-Controlled Arduino Robot by Tero and Kimmo Karvinen. I'm going to skip the mechanical / Arduino chapter and jump straight to the software chapter. I want to have the same setup as this little project has, but instead of controlling a robot, I want to measure my focus in a custom UI.

  • Feb 13, 2025

    41

    I just ordered the Biosensing Starter bundle. And while I was browsing the OpenBCI website, I learned they sell another product called EmotiBit. This product measures physiological responses related to emotional states, which would be a great device to wear if I want to capture data I can use to train an AI model.

    This morning on my way to work I stumbled on Jack Dorsey's Twitter profile. He recently shared a 3.5hr YouTube video by Andrej Karpathy where Andrej gives an in-depth review of how to train and develop LLMs and some mental models to understand "the psychology" of LLMs. Here's Andrej's tweet if you're interested in reading it. Who is Andrej? I didn't know about him, but this is his current Twitter bio: "Building @EurekaLabsAI. Previously Director of AI @ Tesla, founding team @ OpenAI, CS231n/PhD @ Stanford. I like to train large deep neural nets 🤖🧠💥"

    I wonder if it's possible to uncover the "signature brain activity" of emotions just by studying myself. I don't know if there's such a thing as signature brain activity for each emotion. I don't know how emotions differ from each other at a biological / neurological level. But I am assuming that each emotion has a signature. And mapping / defining these signature brain activities for each emotion (and finding labels for different brain activities) could help us understand each other. I wonder if I can train an LLM to output a list of emotions based on my brain activity data. I also assume that the experiences or events that trigger certain brain activities (or emotions) change over time. But I wonder if the signature brain activity for what we call "sadness" or "fear" or "anger" stays the same. That would be wonderful! Because it would allow us to discover these emotional signatures for a wide spectrum of emotions and use them as a basis to see into ourselves and quantify what we are feeling. I'm excited for when those signature emotional waves / brain activity patterns are discovered and accepted. Because the world will change. And our perception of ourselves will change, too.

    I'm very curious about the relationships between all these different brain activity states / patterns. I'm very curious to see and measure how different feelings can be felt at the same time because all the brain regions responsible for causing those emotions are "active" – like having multiple light bulbs turned on at home, where each light bulb allows us to see the world from a different lens. And maybe that's what emotions are – they are sensors and yet they are also lenses. I guess a lens (or an eye) is a sensor! I wonder if emotions are like sensors. Like eyes. But each eye turns on (or opens) based on different triggers and experiences. And when these "emotional eyes" turn on we perceive the world differently, which means we think and behave differently. Which is why, maybe, there are so many different parts of me. Some only "turn on" when I'm with my family. Other parts of me "open up" when I'm with my girlfriend. And others close and open when I'm at work with my co-workers. These are all switches. Eyes. Lenses. Sensors. That influence our behavior.

  • Feb 11, 2025

    42

    I emailed the OpenBCI team to see if I can pick up their Biosensing Starter bundle at their office. They're based in Brooklyn, and I don't want to risk my package getting stolen or lost. I need to update my website builder (what I use to compose my journal entries) so that my posts can include more markdown features like links, lists, and code blocks. Found this great article on a study about EEG-Based Emotion Recognition. I'll dive more into it, but it does a great job of outlining the input-output of emotion recognition using EEGs. This is very exciting.

  • Feb 10, 2025

    43

    I'm going to buy OpenBCI's Biosensing Starter bundle. It costs $829 USD and the kit allows you to "acquire and view EEG, ECG, and EMG raw data in real-time," which is exactly what I want. A 10-20 System has 21 electrode positions. A 10-10 System has 74 electrode positions. The average human adult scalp has 100,000 – 150,000 hair follicles. What if, in the future, we could create hair follicles that functioned as electrodes? We would have roughly 3.3 orders of magnitude more sensors to measure and record brain activity. What if a future non-invasive BCI (brain-computer interface) were a wig? Haha, who knew wigs could be the future. That'd be funny. And what if you could customize that wig as easily as you customize your phone's wallpaper? Long hair one day. White hair another. Curly blue hair another. And glowing red hair on Sunday, just to see how my tías react. I started re-watching Lex Fridman's interview with the Cursor team. I've been using Cursor to build things at work and outside of work, and I'm very interested in the team's perspective on how they designed, built, and trained the models powering Cursor. The foundation of the computer I want to build will require training an AI model to transform my brain activity (and other inputs) into emotional outputs in real-time.

  • Feb 9, 2025

    44

    Today I've been working on creating a presentation to better communicate my goal of creating a computer that measures my emotions, the problem it's trying to solve, the high-level inputs and outputs the computer might need to output a list of emotions, and the many questions I have about the process, applications, and learnings. A big part of the "computer" I will create will be AI. Specifically, training AI to transform a set of inputs into emotional data. I started reading The Age of AI: And Our Human Future by Henry A. Kissinger, Eric Schmidt, and Daniel Huttenlocher. The purpose of the book is to explain what AI is (historical context, how it differs from traditional algorithms, the different learning methods) and start a discussion about the implications of AI's existence in our day-to-day lives and in the future. It's a great book and easy to follow. It's not very technical, and what I'm enjoying about it is that it gives me a little more background on the technology I use every day at work and for my personal projects.

    And as I continue to draft my presentation (which I'll share soon in a future post!) I wanted to share a very important learning I had today while having fun building an animated counter. What I learned was that if AI is struggling to code or achieve the outcome I'm trying to communicate, that likely means I'm not being specific enough about what I need. And, for programming web applications, I can't be more specific than reading the code it outputted and giving it specific instructions (code instructions) on what to change to achieve the desired outcome. For example, I was asking Cursor (the AI code editor I use) to design an animated counter where each digit animates up toward the end value. That is how I started prompting the AI. And over the course of the next hour, I started to refine my approach. The key was to simplify my ask and be more specific about it. To achieve the exact animation I was looking for, I had to first decompose it and understand it myself. Once I broke the animation down into a smaller sub-component / process, I was able to build that one component. And then I used that smaller component to build the larger whole. To be more specific, instead of tasking the AI with animating a number of N digits, I broke down the problem by defining how to animate a single digit. Then, once I had that animation nailed down, I was able to reuse that single-digit animation and scale it to N digits (see the sketch at the end of this entry). It was very satisfying to see how breaking down a problem into smaller sub-units and solving each sub-problem on its own can help abstract logic and simplify higher-level tasks. I share this here because I think it's an important learning that I will have to keep applying as I try to solve more complex problems to create a computer that outputs my emotions in real-time.

    I'm going to buy the most affordable starter kit from OpenBCI. I had a great conversation with a close friend and they helped me realize that I should try to build a prototype (input > computer > output) as quickly as possible to validate my idea. If I can do that and make progress on creating quality emotional outputs from brain activity, then I can start to dive deeper into customizing different parts of the prototype. Let's start simple. Thank you DYZ.
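    Here's the shape of that decomposition (a small sketch in the spirit of what I built, not the exact code Cursor generated; the component names and the 80ms tick are made up):

    // Sketch of the decomposition: solve a single digit first, then compose N of them.
    import { useEffect, useState } from 'react';

    // Step 1: animate one digit counting up toward its target value.
    function AnimatedDigit({ target }: { target: number }) {
      const [digit, setDigit] = useState(0);

      useEffect(() => {
        if (digit === target) return;
        const id = setTimeout(() => setDigit((d) => (d + 1) % 10), 80);
        return () => clearTimeout(id);
      }, [digit, target]);

      return <span>{digit}</span>;
    }

    // Step 2: reuse the single-digit animation for every digit of the number.
    export function AnimatedCounter({ value }: { value: number }) {
      const digits = String(value).split('').map(Number);
      return (
        <span>
          {digits.map((d, i) => (
            <AnimatedDigit key={i} target={d} />
          ))}
        </span>
      );
    }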

  • Feb 8, 2025

    45

    Today I'm writing a list of things I need to measure my emotions in real-time. My main goal is to create a computer that returns a list of emotions. Each emotion will have a corresponding intensity value (for example, "Sadness: 0.2; Joy: 0.1;"); there's a small sketch of this output at the end of this entry. I want this list to reflect my emotional state from no more than one second ago. I don't know what the initial type (or types) of input the computer needs. I also don't know what intermediary technologies I need to use to transform the inputs into the outputs.

    I did some research and I've defined my goal: I'd like to create an EEG. EEG stands for electroencephalography. It's the name of the technology used to record the electrical activity produced by a brain. EEGs use sensors, known as electrodes, placed on a person's head/scalp. These sensors capture the electrical activity produced by the brain. There is an international standard for capturing brain activity called the 10-20 System. This system uses a grid of sensors placed on specific parts of the scalp to capture activity from different brain regions. These systems were created to standardize brain activity measurements and help researchers and clinicians compare their results. The standard was designed by neuroscientist Dr. Herbert H. Jasper in 1958 at the Montreal Neurological Institute. There is another system, the 10-10 System, that uses additional electrodes for more specific measurements. The 10-20 System uses 21 electrodes while the 10-10 System uses 72 electrodes.

    My initial goal is to create 1 wireless electrode. Commercially available EEG devices (OpenBCI's Ganglion Board, ModularEEG, the OpenEEG project) are wired. And my goal is to create an EEG that is wearable and invisible. I am NOT going to wear a helmet-looking device with cables sticking out of it. I don't think it looks cool and I also don't want to draw anyone's attention. I want to measure my brain activity without anyone knowing. I also think it could help me forget that I'm measuring my brain activity so I can go about my day reacting and feeling the way I normally do! My next step is to build a prototype of one wireless electrode. What parts and pieces do I need to buy? What software do I need to create to transform tiny electrical signals into data I can wirelessly stream to my computer?
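    As a sketch, the output I have in mind could be as simple as this (the type name and fields are made up; the point is the shape of the data):

    // Sketch of the computer's output: a timestamped list of emotion intensities.
    interface EmotionReading {
      capturedAt: Date; // should be no more than one second old
      intensities: Record<string, number>; // e.g. { sadness: 0.2, joy: 0.1 }, each 0 - 1
    }

    const example: EmotionReading = {
      capturedAt: new Date(),
      intensities: { sadness: 0.2, joy: 0.1 },
    };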

  • Feb 6, 2025

    46

    Hi, I'm Claudio. I'm a Mexican software engineer building a computer to measure my emotions in real-time. I believe there is an enormous gap in the technology used by doctors, psychiatrists, and therapists to efficiently treat emotional disorders. There is no easy way to obtain biological evidence of a patient's emotional state. And I want to help create accessible, non-invasive devices that allow us to see our emotions as easily as we can see our heart rates. I believe this technology will allow us to collaborate and define a set of health parameters for what we deem "healthy" and "unhealthy" levels of emotional states. I dedicate this journal to everyone struggling with deeply uncomfortable emotional states. My hope is that this technology will help us live better lives with a deeper understanding of what we feel and why we feel it, and as a result live in more peaceful and joyful emotional states.

    It's 8PM. And I sketched a plan to record the movie-watching experiment in my NextJS app. I spent the last few hours building a UI that records video with my web cam and generates an mp4 file. This is the 1st of 3 biometric inputs. The next biometric input will be my heart rate, captured using the Web Bluetooth API and saved as heart rate sessions.
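    For the heart rate input, the standard GATT Heart Rate service should be enough. Here's a minimal sketch of what I'm planning, assuming the monitor exposes that service (most chest straps do); the callback name is made up:

    // Sketch: stream heart rate over the Web Bluetooth API using the standard
    // GATT Heart Rate service ('heart_rate' / 'heart_rate_measurement').
    export async function streamHeartRate(onBpm: (bpm: number) => void) {
      const device = await navigator.bluetooth.requestDevice({
        filters: [{ services: ['heart_rate'] }],
      });

      const server = await device.gatt!.connect();
      const service = await server.getPrimaryService('heart_rate');
      const characteristic = await service.getCharacteristic('heart_rate_measurement');

      characteristic.addEventListener('characteristicvaluechanged', (event) => {
        const value = (event.target as BluetoothRemoteGATTCharacteristic).value!;
        // Per the GATT spec, bit 0 of the flags byte says whether the heart rate
        // value is an 8-bit or 16-bit integer starting at offset 1.
        const is16Bit = (value.getUint8(0) & 0x1) === 1;
        const bpm = is16Bit ? value.getUint16(1, true) : value.getUint8(1);
        onBpm(bpm);
      });

      await characteristic.startNotifications();
      return device;
    }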