We've been building Remyx to help AI teams track what's actually working across their AI development efforts.
Every experiment you and your team run is tracked in one place: where the approach came from, how it was implemented and tested, and whether it moved the metric you care about. Over time, Remyx spots patterns across your experiments and recommends new approaches worth testing, based on what has proven to work.
It connects with the tools you already use (GitHub, Linear, Claude Code, HuggingFace) so experiment context doesn't get lost across six different places.
Full demo video here: https://youtu.be/XscVmkxTACA
The free dev version is live at https://remyx.ai!
We're looking for feedback from teams actively developing AI applications. If you give it a look, we'd love to hear what's missing or what would make it more useful for your workflow.