UNUSABLE.Ai

Filtered Regret Sequencer — Gemini Version

Feb 26, 2026 · prototype

GitHub Demo

Premise

After building the original Filtered Regret Sequencer with ChatGPT, I received a suggestion that Gemini might be stronger on frontend and interface fidelity. The experiment was therefore repeated using Gemini, with the same prompt, same constraints, and the same single‑file HTML requirement.

The goal was not to build a finished product, but to observe how well a different model could translate a dense DSP specification and a hand‑drawn interface sketch into a working browser instrument.


Constraint: single‑file HTML

Everything had to live inside one file: markup, styles, state handling, and the audio engine.

This removes the model’s ability to distribute responsibility across files, forcing it to reason about structure, state, rendering, and signal flow simultaneously.
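A minimal sketch of what that constraint implies, with markup, styles, and the Web Audio engine sharing one document (the structure and names here are illustrative, not the actual generated code):

```html
<!DOCTYPE html>
<html>
<head>
  <style>
    /* layout and control styling live here, inline */
    body { background: #111; color: #eee; }
  </style>
</head>
<body>
  <!-- sequencer grid and knobs rendered here -->
  <div id="grid"></div>
  <script>
    // state, rendering, and signal flow all share one scope
    const ctx = new AudioContext();
    const state = { steps: new Array(16).fill(false), cutoff: 800 };
    // ...sequencer clock and Web Audio graph would follow here...
  </script>
</body>
</html>
```

With no module boundaries, every revision touches the same scope, which matters later when discussing revision stability.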


Early friction: generation reliability

Unlike ChatGPT, Gemini frequently failed to complete generation on the free tier. Attempts would run for 10 to 45 minutes before stopping without output.

Once it did produce output, it asked relevant technical clarification questions about filter topology, ratchet timing, and MIDI sync behavior. The understanding was clearly there. The bottleneck was simply getting the output to complete.
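For context on one of those questions: ratcheting means subdividing a single sequencer step into several evenly spaced retriggers. A pure-function sketch of the timing math, my illustration rather than Gemini's code:

```javascript
// Given a step's start time and duration (in seconds), return the
// onset times of each ratchet retrigger within that step.
// A ratchet count of 1 is just a normal, single-trigger step.
function ratchetTimes(stepStart, stepDuration, ratchetCount) {
  const times = [];
  const sub = stepDuration / ratchetCount;
  for (let i = 0; i < ratchetCount; i++) {
    times.push(stepStart + i * sub);
  }
  return times;
}

// A 0.5 s step ratcheted into 4 retriggers fires every 125 ms:
// ratchetTimes(0, 0.5, 4) → [0, 0.125, 0.25, 0.375]
```

Each returned onset would then be passed to the voice-triggering code scheduled against the audio clock.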


DSP implementation

From an audio perspective, Gemini performed well.

The sequencer's sonic result was comparable to the ChatGPT version. In places, the filter response felt slightly more deliberate.
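That "deliberate" feel likely comes down to how the cutoff envelope decays. A hedged sketch of an exponential decay, the shape that Web Audio's `setTargetAtTime` approximates; the parameter values are illustrative, not taken from the generated instrument:

```javascript
// Exponential decay from a peak cutoff toward a floor, evaluated at
// time t (seconds) with time constant tau:
//   value(t) = floor + (peak - floor) * e^(-t / tau)
function cutoffAt(t, peak, floor, tau) {
  return floor + (peak - floor) * Math.exp(-t / tau);
}

// In a Web Audio graph this curve would typically be scheduled as:
//   filter.frequency.setValueAtTime(peak, t0);
//   filter.frequency.setTargetAtTime(floor, t0, tau);
```

A shorter `tau` gives a snappier pluck; a longer one gives the slower, more deliberate sweep described above.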


Iteration speed vs stability

Once generation worked, iteration speed was fast. Some revisions completed in under a minute.

However, revisions were less stable.

Fixing one parameter could silently break another.

This created a workflow closer to continuous refactoring than structured iteration.
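One way to catch this kind of silent breakage is a handful of in-page assertions run after each revision. A minimal sketch; the control names are hypothetical, not the instrument's actual state shape:

```javascript
// Minimal smoke checks for the instrument's state after a revision.
// Returns a list of failure messages rather than throwing, so all
// regressions surface at once.
function smokeTest(state) {
  const failures = [];
  if (!(state.cutoff > 0)) failures.push('cutoff must be positive');
  if (state.steps.length !== 16) failures.push('expected 16 steps');
  if (typeof state.midiSyncEnabled !== 'boolean') {
    failures.push('midiSyncEnabled must be boolean');
  }
  return failures;
}
```

Running something like this after each paste-in would have flagged the MIDI sync regressions described below immediately.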


Interface fidelity

Gemini stayed slightly closer to the original sketch than the ChatGPT version, but it still deviated.

It consistently reorganized layout decisions toward familiar implementation patterns, prioritizing technical clarity over strict visual fidelity.

DSP logic translated cleanly. Spatial intent did not.



Conclusion

While Gemini was more faithful to the interface sketch than ChatGPT, it did not do everything that was requested. It built its own MVP, then added unrequested functionality along the way, and kept changing the interface, the sound engine, and the feature set, so things that had worked stopped working. Interface labels drifted: Q became Resonance, then Res, then Q again. MIDI sync worked in one iteration and was broken in the next.

One big disclaimer: I couldn't use the heavier model, for whatever reason, and that might have produced better results. It is a shame, because the pace once generation worked was wonderful, with revisions in a minute or two at most. And the generated sound engine had a genuinely nice quality, presumably down to the choice of filter algorithm and decay curve.

And to be honest, Gemini's interface translation is still worse than what the most junior designer would produce. For custom interfaces, we still need designers to make something that invites use.