I love music. I've never finished a single track.

I've wanted to produce music since I was twenty. Work always got in the way. Then my first child was born, and suddenly those 3 AM feeding sessions became study time. I took Berklee's Electronic Music Production course on Coursera. Learned lo-fi and synthwave on YouTube. Bought Ableton Live and Serum, subscribed to Splice, hoarded courses on Udemy, LinkedIn Learning, EdmProd, and more.

Years passed. Dozens of courses completed. Tracks finished: zero.

Information Overload, Zero Output

The problem wasn't the tools. I had underestimated the gap between consuming music and producing it. More precisely: I was collecting tools without knowing what I actually wanted to make.

DAW tutorials, music theory, drum design, mixing, mastering... it felt like I needed to learn everything. If time were infinite, I'd probably study astronomy too. But trying to absorb a semester's worth of curriculum in 30-minute pockets between work and parenting was never going to work.

I wanted to make music like HONNE, Ravages, HYBS. Spent three months recreating one track. After finishing it, I realized — this wasn't what I actually wanted to do. Building a storyline in a DAW, layering vocals, injecting emotion into a composition — for me, that wasn't “the pain of creation”. It was just pain.

But one thing was fun: playing with synthesizers.

Not Composition — Sound Design

Waveforms coming out of an oscillator. Passing through a filter. An LFO modulating the cutoff. An ADSR envelope shaping the amplitude. The process itself was what I loved. The feeling of assembling sound. Subtractive, additive — carving and stacking frequencies to create something from nothing. That pulled me in far more than finishing a track ever did.
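That chain — oscillator into filter, LFO on the cutoff, envelope on the amplitude — is small enough to sketch in code. Here is a minimal subtractive voice in Rust (illustrative names and parameters of my choosing, not code from PatchBee or any real synth): a sawtooth runs through a one-pole lowpass whose cutoff an LFO sweeps, then through a linear attack/decay envelope standing in for the VCA stage.

```rust
use std::f32::consts::TAU;

const SAMPLE_RATE: f32 = 44_100.0;

/// Naive sawtooth: phase in [0, 1) mapped to [-1, 1).
fn saw(phase: f32) -> f32 {
    2.0 * phase - 1.0
}

/// One-pole lowpass: y[n] = y[n-1] + a * (x[n] - y[n-1]).
fn one_pole(prev: f32, input: f32, cutoff_hz: f32) -> f32 {
    let a = (TAU * cutoff_hz / SAMPLE_RATE).min(1.0);
    prev + a * (input - prev)
}

/// Linear attack/decay envelope (the A and D of ADSR), in [0, 1].
fn envelope(t: f32, attack: f32, decay: f32) -> f32 {
    if t < attack {
        t / attack
    } else {
        (1.0 - (t - attack) / decay).max(0.0)
    }
}

/// Render one second of the voice: VCO -> VCF -> VCA,
/// with the LFO modulating the filter cutoff.
fn render_one_second() -> Vec<f32> {
    let (freq, lfo_hz) = (110.0, 0.5);
    let mut phase = 0.0_f32;
    let mut filtered = 0.0_f32;
    (0..SAMPLE_RATE as usize)
        .map(|n| {
            let t = n as f32 / SAMPLE_RATE;
            // LFO sweeps the cutoff between 200 Hz and 2200 Hz.
            let cutoff = 1200.0 + 1000.0 * (TAU * lfo_hz * t).sin();
            phase = (phase + freq / SAMPLE_RATE) % 1.0;
            filtered = one_pole(filtered, saw(phase), cutoff);
            filtered * envelope(t, 0.01, 0.8)
        })
        .collect()
}

fn main() {
    let samples = render_one_second();
    println!("rendered {} samples", samples.len());
}
```

Forty-odd lines, and every module from the paragraph above is in there. Swap the saw for a different waveform or the one-pole for a resonant filter and you have a different instrument — which is exactly the appeal.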

Arturia V Collection

I started studying Arturia's V Collection — nearly 30 virtual synthesizers. Two years in, I still hadn't gotten through them all. But a pattern emerged: every synthesizer was fundamentally a combination of the same few modules. Oscillators, filters, amplifiers, LFOs, envelopes. Understand those modules, and you're no longer limited to any single instrument. You can build your own sound.

Then I discovered Deadmau5.

The World of Modular Synthesis

Specifically, I watched his Masterclass. Seeing him plug cables into a massive modular system every Sunday, sculpting sound from voltage — something clicked hard. It felt like crawling inside a computer and watching electrons flow through circuits to create music. As someone who writes code for a living, this was irresistible.

VCV Rack

I found VCV Rack — a virtual modular environment where you can patch Eurorack modules in software. The sheer variety of modules. The fact that your choice of modules fundamentally changes the music. And honestly — there's something undeniably beautiful about glowing panels connected by colorful cables.

I wanted to become a patcher. I wanted to build my own modular system and create techno with it.

The Engineer's Approach

This is where my day job enters the picture.

I started reading the source code of Cardinal, the open-source fork of VCV Rack. C++ and DSP. Unfamiliar territory, but not entirely foreign — I'm a backend engineer. Digging into the internals of systems is literally what I do. The only difference: this time, audio signals flow through the pipeline instead of HTTP requests.

Then AI coding tools exploded — and for someone with limited time and unlimited ambition, everything changed. I'll write more about how AI reshaped my entire development workflow in the next issue. For now, the short version: I started building a modular synthesizer from scratch in Rust and WebAssembly. A synth that runs in a browser. Shareable via URL. That's PatchBee.

At the same time, another question emerged: can AI learn to patch?

Services like Suno and Udio generate songs with AI. But that's not what I wanted. I wanted AI that actually operates a synthesizer. Choosing oscillators, connecting filters, routing LFOs, setting envelopes — creating the sound itself. Not a track. A timbre. Not a preset. A patch.
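One way to see the difference: a patch is data, not audio. A hypothetical sketch (this is not DeepSynth's actual data model) of what an AI patcher would emit — a list of modules plus the cables between them:

```rust
/// Illustrative module types; names are mine, not from any real library.
#[derive(Debug)]
enum Module {
    Vco { freq_hz: f32 },   // oscillator
    Vcf { cutoff_hz: f32 }, // filter
    Lfo { rate_hz: f32 },   // modulator
    Vca,                    // amplifier
}

/// A cable from one module's output to another module's input jack.
struct Cable {
    from: usize,         // index of the source module
    to: usize,           // index of the destination module
    input: &'static str, // which input on the destination
}

/// The classic subtractive chain: VCO -> VCF -> VCA,
/// with an LFO patched into the filter cutoff.
fn basic_patch() -> (Vec<Module>, Vec<Cable>) {
    let modules = vec![
        Module::Vco { freq_hz: 110.0 },
        Module::Vcf { cutoff_hz: 800.0 },
        Module::Lfo { rate_hz: 0.5 },
        Module::Vca,
    ];
    let cables = vec![
        Cable { from: 0, to: 1, input: "audio" },  // VCO -> VCF
        Cable { from: 2, to: 1, input: "cutoff" }, // LFO modulates cutoff
        Cable { from: 1, to: 3, input: "audio" },  // VCF -> VCA
    ];
    (modules, cables)
}

fn main() {
    let (modules, cables) = basic_patch();
    println!("{} modules, {} cables", modules.len(), cables.len());
}
```

Generating a structure like that — choosing the modules, deciding what plugs into what — is a very different problem from generating a waveform, and it's the one that interests me.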

That's DeepSynth. As I learned patching through Omri Cohen's tutorials, DeepSynth learned alongside me. A study partner in code form.

Convergence

Looking back, every detour — DAW courses, Serum, Arturia, music theory, drum design, hardware purchases — was converging toward a single point.

I'm not a songwriter. I'm an engineer who builds sound. And applying AI to that engineering is the most natural thing I can do.

Call it the gradient descent of life. I wandered across the loss landscape, trying every direction. But the gradient was pointing here all along: AI + modular synthesis. Building sound with code, exploring patches with AI, and documenting the entire process.

This newsletter is that documentation. Just as backpropagation traces gradients backward through a neural network, I'm tracing my own trial and error backward to find the optimal path.

Welcome to backpropagated.

Matthew deChang — An engineer who builds sound. AI modular synthesis artist.
