This solves a massive headache. The drift between externally generated tests and an active codebase is a brutal problem to maintain.
Using vision-based execution instead of brittle XPaths is a great baseline, but moving the test definitions to live directly alongside the repo context is definitely the real win here.
Did you find that generating the YAML from the codebase context entirely eliminated the "stale test" issue, or do developers still need to manually tweak the generated YAML when mobile UI layouts change drastically? Great project!
Hi Avikaa, finalrun provides skills that you can integrate with any IDE of your choice. You can simply ask the finalrun-generate-test skill to update all the tests for your new feature.
> The shift for me was realizing test generation shouldn’t be a one-off step. Tests need to live alongside the codebase so they stay in sync and have more context.
Does the actual test code generated by the agent get persisted to the project?
If not, you have kicked the proverbial can down the road.
Yes gavinray, it gets persisted to the project and lives alongside the codebase, so any generated test has the best context of what is being shipped. That context lets the AI models test any feature more accurately and consistently.
Hey, that's true: verification of AI-generated code needs proof, with a video of each action, console logs, and network logs. Would love to know how you are solving this for web; it would be a great learning for me too.
Just updated the README.md; it's a lot simpler and focuses on the core. Thanks for the feedback, please check it out.
Verification of AI-generated code would be dope.
We do something similar at our company for web with Playwright, but we're facing a lot of flaky tests.
Will check this out.
I just ran my first test. Thanks team :)
Do share your feedback.
Looks pretty cool. How does your agent understand plain English?
We have built a QA agent that can understand your plain-English intent and uses vision to reason and navigate the app to test that intent. You can check our benchmark here https://finalrun.app/benchmark/ and how we architected our agent for the benchmark https://github.com/final-run/finalrun-android-world-benchmar.... It's all open source.
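For readers wondering what a plain-English test definition living alongside a repo might look like, here is a minimal hypothetical sketch; the file layout, keys, and steps are illustrative assumptions, not finalrun's actual schema:

```yaml
# Hypothetical example of a plain-English test kept in the repo.
# Keys and structure are illustrative, not finalrun's real format.
name: checkout-happy-path
description: Verify a signed-in user can complete a purchase
steps:
  - Open the app and sign in as a test user
  - Add the first product on the home screen to the cart
  - Open the cart and tap "Checkout"
  - Verify the confirmation screen shows a total and an order id
```

Because each step states intent rather than selectors, a vision-based agent can re-ground the actions on the current UI instead of relying on brittle XPaths.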
Agentic testing. Kudos to your decision to open-source it!