During my mid-teens, I got this wild idea that I could reproduce the experience of Psilocybe cubensis by learning to mimic the brainwave patterns through the practice of neurofeedback. I didn't have an EEG, but I learned about the OpenEEG project. Eventually I bought an OpenEEG-based MonolithEEG[0] during a summer when I was fortunate enough to be in Western Europe.
Shortly thereafter, I realized I had no experience at all with electronics assembly, and the fever dream quickly evaporated. The MonolithEEG PCB was lost to time.
[0] http://www.shifz.org/moosec/index-Dateien/Page431.htm
To answer your question: my primary goal right now is simply reliable, high-fidelity data collection. However, I think neurofeedback is a fascinating application. I've been interested in eventually mixing this tech with tACS in a closed-loop control system to train the brain to enter specific mental states.
Regarding the MonolithEEG, it's wild to look back at that tech. It is a shame it was limited to 2 channels at 10-bit resolution, but it was a pioneer. With the ADS1299, we are now getting 24-bit resolution across 8 channels. That jump in dynamic range makes a huge difference, especially for precision applications like SSVEP where the noise floor really matters.
Also, for more context:
Reference: https://pmc.ncbi.nlm.nih.gov/articles/PMC7867505/
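To put rough numbers on that, here's a quick back-of-envelope sketch. The ADS1299 figures use its 4.5 V internal reference at a gain of 24; the 10-bit front-end input range is an assumption for comparison, not a measured MonolithEEG spec:

    #include <stdio.h>

    /* Back-of-envelope comparison of input-referred resolution.
     * Assumed figures, not measurements: an ADS1299 running from its
     * 4.5 V internal reference at a gain of 24, versus a generic 10-bit
     * front end with roughly a +/-500 uV usable input range after gain. */
    int main(void) {
        double vref_V = 4.5, gain = 24.0;
        double fullscale_uV = (vref_V / gain) * 1e6;      /* ~187,500 uV per side */
        double lsb_ads1299_uV = 2.0 * fullscale_uV / (1 << 24);

        double lsb_10bit_uV = 2.0 * 500.0 / (1 << 10);    /* hypothetical 10-bit AFE */

        printf("ADS1299, gain 24: %.3f uV/LSB\n", lsb_ads1299_uV);  /* ~0.022 uV */
        printf("10-bit front end: %.3f uV/LSB\n", lsb_10bit_uV);    /* ~0.977 uV */
        return 0;
    }

Roughly 22 nV per code versus about 1 µV per code, before you even account for noise.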
Such a striking similarity to my own path. But I was in my early-to-mid 20s, going through some more difficult times, and after a lot of research and study of the nervous system and trauma, I came to the conclusion that neurofeedback seemed like the magic wand with the biggest chance of actually producing a transformative effect.
I was experienced with soldering and electronics (mostly board repairs, so not design), but not at a professional level. Initially I got an Analog Devices ADC, which they sent for free since I was still registered as a student at the time. I was trying to replicate some existing open-source projects, but at extremely low cost. Ultimately I got stuck in the weeds, gave up, and just bought the ADS1299EEGFE-PDK evaluation board (upon which the original OpenBCI is based, iirc). But eventually I postponed that too; I was in the process of converting the LabVIEW software to C and adding support for real-time signal processing. After a short while I moved to the opposite corner of Europe, and all those boards are sitting somewhere in my parents' attic. So the question in my mind still remains, because neurofeedback does sound a bit too good to be true. But the evidence is solid as well.
I will definitely give it another go at some point when life gives me more slack/spare time and space.
Psilocybin works on the 5-HT2A receptor (https://en.wikipedia.org/wiki/5-HT2A_receptor) and flatlines the median raphe nucleus (https://en.wikipedia.org/wiki/Median_raphe_nucleus), and if you wanted to measure that you would have to stick electrodes deep into your brain; no way are you going to see what is going on there from the surface.
Stuff I was doing last month got me interested in biofeedback again. I have some talent for it; I can make those mood rings change color at will.
Most of the EEG-based biofeedback devices have three electrodes around the temple, cost about $300, and don't really work, because the alpha, beta, theta, and delta waves appear in different parts of the brain and can't all be read from the same electrodes. I hear you can do better with five electrodes, but the five-electrode headsets I see don't advertise a price.
I wound up getting a Polar H10 heart rate monitor, which can be used with HRV software
https://en.wikipedia.org/wiki/Heart_rate_variability
but the "biofeedback" apps I have seen so far seem to be breathing exercises that you could do without any hardware. I have electronics for EMG (muscles) and GSR (skin resistance) to hook up to an Ardunio and will probably try making a setup. I'm still looking for a soup-to-nuts answer for EEG biofeedback.
Yeah, last year I started drafting a sci-fi setting on the edge of fantasy and dystopia where that was the MacGuffin, and judging by what my RSS reader shows me, science is catching up with it.
This is a super cool project! Probably the most interesting neurotech hardware I've run across since OpenBCI was released.
It would be great to see a side-by-side comparison of Cerelog and OpenBCI data from the same session/patient.
A few questions:
- Could you clarify which parts of the project are licensed MIT, which are CC-BY-SA, and which are CC-BY-NC-SA? It seemed like the guide and the README had more restrictive language than the actual license file.
- What made you decide to start fresh, rather than adding the features you needed to the OpenBCI?
Thanks for the kind words! The side-by-side comparison is high on my to-do list!
Regarding licensing, sorry about the confusion between my repo init and the docs. I have updated the repo to clarify the distinction:
- Firmware & software: MIT License. I want people to build whatever they want on top of the stack.
- Hardware schematics: CC-BY-NC-SA (non-commercial).
Why the split? Since I am a solo bootstrapper, I need to protect the hardware from low-effort commercial clones while I get the business off the ground. But I strongly believe in "source available" schematics so researchers and engineers can debug, learn, and modify their own units, hence the CC-BY-NC-SA choice for the board files.
Why start fresh? It was an architecture decision. The Cyton uses a PIC32 + RFduino stack. I wanted to handle everything natively on the ESP32 for high-bandwidth WiFi streaming, which required a ground-up redesign. I also wanted to add onboard LiPo charging and the ability to experiment with different filter topologies. Building it from scratch helped me uncover a lot of subtle design constraints that aren't obvious until you dig into the layout.
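For a rough sense of why the WiFi bandwidth matters, here's a quick back-of-envelope on the raw payload rate. It assumes the standard 27-byte ADS1299 frame per sample and ignores packet framing and protocol overhead:

    #include <stdio.h>

    /* Raw payload rate for streaming ADS1299 frames:
     * 3 status bytes + 8 channels x 3 bytes = 27 bytes per sample.
     * The sample rates listed are the ADS1299's supported settings. */
    int main(void) {
        const double frame_bytes = 27.0;
        const double rates_sps[] = {250, 500, 1000, 2000, 4000, 8000, 16000};
        const unsigned n = sizeof rates_sps / sizeof rates_sps[0];

        for (unsigned i = 0; i < n; i++) {
            double kbit_s = frame_bytes * 8.0 * rates_sps[i] / 1000.0;
            printf("%6.0f SPS -> %8.1f kbit/s raw payload\n", rates_sps[i], kbit_s);
        }
        return 0;
    }

At 250 SPS it's tiny, but at the higher sample rates the raw stream climbs into the multi-megabit range once you add framing, which is where WiFi starts to pay off over BLE.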
Thanks for making this! I'm very tempted to get one of these to do some SSVEP stuff.
Do you have plans to make a 16-channel (or 32-channel?) board in the future? In my area of research, 32 channels tends to be the recommended minimum for studies.
I'm glad you like it! I actually made an SSVEP Pong game with this a while back; it was kinda hard to play since the paddle was really small, but it was a cool concept demonstration. I am working on a video for this device to show off its capabilities in more depth, as the current video on the site is very old.
With regards to higher channel counts: yes, I have been thinking about this, but it will likely not be released for a few months or longer. The firmware/software rules change a lot once you start daisy-chaining the ADCs, so development takes longer and I need to reincorporate the changes back into the software ecosystems. The hardware configuration is also a bit different.
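As a sketch of why the data path changes: each ADS1299 shifts out a 27-byte frame (3 status bytes plus 3 bytes per channel), and daisy-chained devices concatenate their frames on DOUT, so both the read length and the decode loop have to scale with device count. Something like this (simplified; the exact framing in the real firmware may differ):

    #include <stdio.h>
    #include <stdint.h>

    #define ADS1299_CHANNELS      8
    #define ADS1299_BYTES_PER_CH  3
    #define ADS1299_STATUS_BYTES  3

    /* Bytes clocked out per DRDY for N daisy-chained ADS1299 devices. */
    static size_t frame_bytes(unsigned num_devices) {
        return (size_t)num_devices *
               (ADS1299_STATUS_BYTES + ADS1299_CHANNELS * ADS1299_BYTES_PER_CH);
    }

    /* Convert one 24-bit two's-complement sample to a signed 32-bit value. */
    static int32_t sample_to_i32(const uint8_t *p) {
        int32_t v = ((int32_t)p[0] << 16) | ((int32_t)p[1] << 8) | (int32_t)p[2];
        if (v & 0x800000) v -= 0x1000000;   /* sign-extend from 24 bits */
        return v;
    }

    int main(void) {
        printf("1 device:  %zu bytes per DRDY\n", frame_bytes(1));   /* 27 */
        printf("2 devices: %zu bytes per DRDY\n", frame_bytes(2));   /* 54 */

        const uint8_t raw[3] = {0xFF, 0xFF, 0xFE};                   /* -2 in 24-bit */
        printf("decoded sample: %ld\n", (long)sample_to_i32(raw));
        return 0;
    }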
This is very interesting. I was looking into the viability of something like this a few months ago and started seeing eye-watering prices and closed-off ecosystems. And there were many gotchas when looking into DIY, more than I could justify learning about.
To be honest, the two biggest drivers for this project were Cost and Signal Integrity.
1. Cost: This was my main frustration. The Cyton is currently priced at $1,249. I managed to get the Cerelog ESP-EEG down to $299 (assembled). I really wanted to lower the barrier to entry for individual researchers and hackers who can't drop a grand on a hobby board.
2. The Bias/Noise Implementation: While we both use the same high-end ADC (TI ADS1299), I implemented the bias (driven-right-leg) drive differently. I designed a true closed-loop feedback system: by actively driving the inverted common-mode signal back into the body, the board follows the TI spec aggressively to help cancel out 60 Hz mains hum (there's a register sketch after this list).
Regarding the analog front end: the current version keeps the inputs flexible (firmware configurable) for different montages. However, I've found that most researchers just stick to a single standard montage configuration. Because the Cyton tries to be a "jack of all trades" for every possible montage, it compromises on physical filtering. For future revisions, I plan to trade some of that flexibility for dedicated common-mode and differential hardware filtering to lower the noise floor even further. I already had this on a previous prototype revision but decided to take it out to simplify testing. I'd like to add it back in a future revision after some more prototype testing.
3. Connectivity: I'm using the ESP32 to stream over WiFi rather than a proprietary USB dongle. I've been trying to get the BLE software working as well, but I've noticed macOS drivers aren't the friendliest to my implementation.
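To make the bias point more concrete, this is roughly what routing channels into the ADS1299's bias amplifier looks like in firmware. The register addresses come from the ADS1299 datasheet's register map; the write helper and the exact bit values here are a simplified sketch rather than the production firmware:

    #include <stdint.h>

    /* Register-write helper; a real driver issues the WREG opcode
     * (0x40 | addr), a count byte, then the register data over SPI. */
    extern void ads1299_write_reg(uint8_t addr, uint8_t value);

    /* ADS1299 register addresses (datasheet register map). */
    #define REG_CONFIG3     0x03
    #define REG_BIAS_SENSP  0x0D
    #define REG_BIAS_SENSN  0x0E

    /* Closed-loop bias drive: derive the common-mode signal from all eight
     * channels and feed the inverted result back out through the BIAS pin.
     * 0xFF selects CH1..CH8; 0xEC enables the internal reference buffer,
     * the internally generated BIASREF, and the bias buffer. */
    void bias_closed_loop_setup(void) {
        ads1299_write_reg(REG_BIAS_SENSP, 0xFF);
        ads1299_write_reg(REG_BIAS_SENSN, 0xFF);
        ads1299_write_reg(REG_CONFIG3,    0xEC);
    }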
Please do.
The biggest challenge was the SPI communication during the initialization phase. I had a timing violation in the register setup sequence that caused the IC to enter unpredictable states.
Because the ESP32 is so fast, I was driving the SPI lines without adequate delay between bytes during configuration. The ADS1299 would technically "communicate" but then behave erratically during data acquisition. I had to go back to the datasheet's SPI timing diagrams and strictly enforce the timing constraints in firmware to get it stable. I wish SPI were a more strictly defined standard.
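For anyone hitting the same wall: the usual culprit is the ADS1299's command decode time between bytes of a multi-byte command (t_SDECODE, 4 tCLK, roughly 2 µs at the typical 2.048 MHz master clock). A minimal sketch of the pattern, with hypothetical spi_txrx_byte() and delay_us() helpers standing in for whatever HAL is in use, and chip-select handling omitted (CS has to stay low for the whole command):

    #include <stdint.h>

    /* Hypothetical low-level helpers provided by the HAL/SDK in use. */
    extern uint8_t spi_txrx_byte(uint8_t out);   /* one full-duplex SPI byte  */
    extern void    delay_us(uint32_t us);        /* busy-wait in microseconds */

    /* The ADS1299 needs at least 4 tCLK between bytes of a multi-byte
     * command (t_SDECODE). At the typical 2.048 MHz clock that is ~2 us;
     * round up a little for margin. */
    #define ADS1299_INTER_BYTE_DELAY_US  4u

    /* Write one register: WREG opcode (0x40 | addr), a count byte of
     * n-1 = 0 for a single register, then the data byte, with a pause
     * between bytes so the command decoder can keep up. */
    void ads1299_write_reg(uint8_t addr, uint8_t value) {
        spi_txrx_byte(0x40 | (addr & 0x1F));
        delay_us(ADS1299_INTER_BYTE_DELAY_US);
        spi_txrx_byte(0x00);
        delay_us(ADS1299_INTER_BYTE_DELAY_US);
        spi_txrx_byte(value);
        delay_us(ADS1299_INTER_BYTE_DELAY_US);
    }

The same pacing applies to RREG reads; only the continuous data readout (RDATAC) runs without the per-byte pauses.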
https://www.cnx-software.com/2025/12/26/cerelog-esp-eeg-a-lo...
And here
https://www.hackster.io/news/this-open-source-eeg-board-brin...
Welcome to HN! I hope your project gets some good discussion.