> declarative in yaml
Aw gawd no. Why do test framework authors repeatedly think this is a good idea? Everywhere I’ve worked that has embraced such a framework (often robot), the suites always eventually outgrow the capabilities of whatever DSL the framework provides.
Even with escape hatches into an established language, you turn the majority of test editing into a second-class activity, because there is no chance that your "test-IDE" will beat IntelliJ or VS Code. Developers don't want to touch such test code, and testers do the lazy thing and copy-paste instead of building appropriate fixtures. Do you really want to relearn how to define a constant in the flavor-of-the-month test DSL, vs just doing it the way you always do in TS?
When you see a YAML file that resembles a list of "steps", it's not declarative anymore; it's a crippled imperative language in disguise.
Hi everyone, co-founder of Mobile.dev and co-author of Maestro here. Thanks jstan for sharing Maestro and for the kind words; really glad to hear it's working well for you!
We built Maestro because E2E testing felt unnecessarily complicated, and we wanted something simple and powerful that anyone could use — whether you’re a seasoned developer or just getting started with automation. It’s been amazing to see it adopted at companies like Meta, Block, DoorDash, Stripe, and Disney, but honestly, what excites us most is seeing teams who’ve never done test automation before finally get a solid strategy in place because Maestro is so easy to use and get started with.
Oh, and if you’re wondering — yes, it works for web testing too!
We’re constantly iterating and adding features, so if you’ve got ideas, run into issues, or just want to chat, let me know. Always happy to hear how we can make it better.
Thanks again for checking it out, and happy testing!
Please add the ability to drive an actual iOS device instead of simulators.
In the works - stay tuned!
We need to trigger external effects (e.g. on a USB-attached embedded device) and sometimes do stuff like forget a BLE device in system settings. We've been looking into making a fake mouse that can achieve the latter. If you can support both use cases, you've won.
> fake mouse
That's what I'm doing with my new project, Valet. It's a Raspberry Pi configured to be a fake mouse, keyboard, and an Android (touch stylus). Works well on iOS and Android.
We extensively use Maestro in our testing setup. We test our Android (+ Android TV) and iOS apps on a couple of different emulators, and use it to take a bunch of screenshots to generate diff reports.
The only thing we don't like so far is that it is not extensible at all. And the AI direction is one that we absolutely don't care for either.
We built a huge wrapper script in Python that allows us to spin up and control a WireMock server, as well as help us implement features that are not supported by Maestro directly. We make calls to a local webserver (spawned by our wrapper) from the Maestro test to do this, which works surprisingly well, but it feels like we could perfectly leverage custom YAML commands or something like that.
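A minimal sketch of that local-webserver trick, assuming a hypothetical `/wiremock/reset` endpoint (the real wrapper's routes, port, and WireMock admin calls are whatever your own script defines):

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

class WrapperHandler(BaseHTTPRequestHandler):
    """Hypothetical side-effect endpoint the Maestro flow can call over HTTP."""

    def do_POST(self):
        if self.path == "/wiremock/reset":
            # A real wrapper would call out to the WireMock admin API here.
            body = json.dumps({"status": "ok", "action": "wiremock-reset"})
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body.encode())
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        # Keep the test console quiet.
        pass

def start_wrapper_server(port=0):
    """Start the helper server on a background thread; returns (server, bound_port)."""
    server = HTTPServer(("127.0.0.1", port), WrapperHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server, server.server_address[1]
```

A Maestro flow step could then hit `http://127.0.0.1:<port>/wiremock/reset` from its JavaScript support; the endpoint name here is made up purely for illustration.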
We used to use Maestro but then they unfortunately decided to go all in on AI and hiked the price to match, making it no longer worthwhile for us.
We loved Maestro but we didn’t like the pricing and the team wanted something more predictable. We are using Moropo and it’s been great. Very affordable, good DX, and it’s all basically just Maestro with extras.
The advantages of using open source! It would be great if it became industry standard and more companies would offer it as a service.
Moropo sounds great, but I was unable to find an open-source repo or self-hosting docs. Would you be able to share the GitHub link for self-hosting Moropo? Thanks!
Tom, co-founder of Moropo here, thanks for the mention :-)
Is the open-source version not usable/allowed for CI/CD environment?
Yep, free and open-source to use. Plenty of folks run Maestro directly on GitHub Actions, Bitrise, etc. Teams often run on our hosted cloud infra for parallelism and reliability when scaling up their testing, but that's totally up to you!
At SoFi we are using it with GitLab and AWS Linux/Mac runners. We can do parallel runs and data-driven tests, and get test logs, screen recordings, etc.
Hey tibbe - co-founder of Mobile.dev here. First off, totally get where you're coming from. We do offer a startup discount, but would love to dig in more to see if there's something we can work out. My co-founder and I would love to chat if you're open to it! Just shoot me a note if interested! leland@mobile.dev
Wait? You have to pay? Or you mean the cloud?
I've used maestro in the past and liked it, very easy to get started and add decent coverage quickly.
Just wanted to share a project that I'm keeping a close eye on. I haven't actually used it yet but hoping to do so soon: https://github.com/takahirom/arbigent
I found out about Maestro after coming across Flashlight. I was looking for something that could effectively give me a performance score for my apps, like Google Lighthouse but for mobile apps. I found that in Flashlight. I found Maestro was relatively easy to pick up, like others have said before.
I'm a performance engineering consultant, but my apps are side projects, so I needed something that helps me do some quick performance testing. Maestro, and Flashlight, help me do that. It's early days, but I'm actually working on a separate product to use both Flashlight and Maestro to test on any number of real devices so I can get performance score trends across devices. Contact me if you're interested.
Looking forward to testing some of the updates with Maestro, especially web and iOS device support.
Installed and tried it for a sample Flutter app. So far looks too good to be true :) Super easy to start and tinker with. And surprisingly fast. Learning how to write real world tests with Flutter apps probably will have some learning curve, but that's expected.
Would be amazing to use it with Flutter desktop (macOS at least) to avoid running iOS simulators.
I've been using Maestro for two very large Flutter apps, and it's so far ahead of every other option it's not even funny.
No long compilation times, no half-baked testing dev experience, supports iOS and Android, no pumpAndSettle BS, no Flutter hacks, multiple cloud providers (cloud.mobile.dev, Moropo). You can interact with native elements, so you can work with push notifications, system dialogs, system settings, email clients, web views, and browsers. And the test definition files are simple enough that every capable QA engineer can maintain them with very little supervision from developers (no Dart expertise needed for writing tests).
I can only recommend it.
Thanks for sharing your experience! Definitely gonna try on real projects.
I'm curious what other solutions you tried to test your Flutter app.
Vanilla Flutter tests, Honey, Patrol, these are the ones I remember.
And yes, amongst those Patrol was the best but at the time we decided, the Maestro experience was significantly better.
haha happy to hear that :) (I created Patrol) ((and then worked on Maestro at mobile.dev briefly))
I too think that for the vast majority of use cases, Maestro is the best solution. Fast and easy to write and run.
Also it's cool that it's open-source and has very strong community. I'd be skeptical to have all my tests stored in some SaaS that I can't even run locally (as some other solutions do)
Yes, I recognized your name!
Maestro being open source, being able to run it locally AND having two providers where we can just sign up and start running our tests was an important factor going with them.
We ended up going with Moropo as their pricing matched our needs better. The fact that, even if we had issues with them in the future, we could just go to a different provider is a big plus.
When testing mobile apps, how do you manage the data at the backend? I.e., how do you ensure that the data that you see in the app is the same every time, and that actions during one test do not affect the data for the next test?
When testing the backend in frameworks such as Rails, this is taken care of by seed data and DB transactions.
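For a backend you control directly, one pattern analogous to Rails' transactional fixtures is to wrap each test in a transaction and roll it back, so the seed data is identical every run. A minimal SQLite sketch (the table and helper names are illustrative):

```python
import sqlite3

def run_isolated(conn, test_fn):
    """Run test_fn inside a transaction, then roll back so seed data is untouched."""
    conn.execute("BEGIN")
    try:
        test_fn(conn)
    finally:
        conn.rollback()  # discard any writes the test performed

# Seed the database once (isolation_level=None gives manual transaction control).
conn = sqlite3.connect(":memory:", isolation_level=None)
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('seed-user')")

def noisy_test(c):
    # The test mutates data freely...
    c.execute("INSERT INTO users VALUES ('created-during-test')")
    assert c.execute("SELECT COUNT(*) FROM users").fetchone()[0] == 2

run_isolated(conn, noisy_test)
# ...but after rollback only the seed row remains.
print(conn.execute("SELECT COUNT(*) FROM users").fetchone()[0])  # → 1
```

This only works when the test and the backend share the transaction, though; for true E2E runs against a deployed backend, teams more commonly expose per-test seed/reset endpoints or spin up ephemeral environments per run.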
Trying this out at work and so far it has been leagues better than other mobile automation tools. I have just gotten started but it has been encouraging.
Awesome to hear! Feel free to tag me (@Leland) in our Slack community if you run into any issues! Slack invite: https://docsend.com/view/3r2sf8fvvcjxvbtk
Have you tried GPTdriver https://www.mobileboost.io/ for seamless natural language Gen AI mobile UI testing?
Is there a REPL?
I'm playing with appium today and installed a ruby appium repl called arc.
I find that it is helpful to explore the app using a repl, then convert the IDs and locators into real tests later.
Does maestro have a repl? Or, does maestro avoid the need for that somehow?
Yeah, there is a REPL: it's called Maestro Studio. It opens a local web page where you can inspect app elements and execute commands.
I used Maestro for one year after we switched from Detox. It's awesome to start end-to-end tests with and definitely the most accessible. However, in the end, we had to switch to Appium. While it's great to get started quickly, I definitely wouldn't recommend it for a serious production system pipeline. We encountered several issues:
- When attempting to write logic using JavaScript, there were issues such as error stacks providing no useful information and a complete lack of console logging. The injection of variables and the custom fetch also made linting ineffective. At least Maestro now supports ES6 via GraalJS.
- Coordination of test flows is lacking. I wish I could retry each flow individually. Ultimately, I had to create a wrapper (director) around Maestro to provide things like recordings, retries on failing tests, and (relevant) JSON output for our CI. I also needed to write custom reporters for Slack and other integrations. While these are not a core need for a testing tool, when I switched to Appium + Webdriver, most of these tools were available out-of-the-box (though they come with their own issues as well).
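A per-flow retry wrapper like the "director" described here can start very small; the sketch below shells out to a command and retries it, with `["maestro", "test", flow]` as the assumed CLI invocation (check the flags against your Maestro version):

```python
import subprocess

def run_with_retries(cmd, attempts=3):
    """Run a command (e.g. ['maestro', 'test', 'login.yaml']) up to `attempts` times.

    Returns (succeeded, attempts_used) so CI can report per-flow results."""
    for attempt in range(1, attempts + 1):
        result = subprocess.run(cmd, capture_output=True)
        if result.returncode == 0:
            return True, attempt
    return False, attempts

def run_suite(flows, runner=run_with_retries):
    """Retry each flow individually and collect a JSON-serializable report."""
    report = []
    for flow in flows:
        ok, used = runner(["maestro", "test", flow])
        report.append({"flow": flow, "passed": ok, "attempts": used})
    return report
```

Recordings, Slack reporters, and richer JSON output would layer on top of the returned report; this is just the retry core.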
- When we updated Xcode or iOS to newer versions, things tended to break. We often had to freeze the pipeline versions and wait until fixes were released, regularly checking GitHub issues to see if updates became available.
- We also experienced strange, random test timeouts, which became frequent enough to break runs even with retries. These happened even though we had frozen versions of Maestro, Xcode, and iOS, so it wasn't Maestro's fault, but it was problematic enough that we decided to move away from it, because we couldn't isolate the root cause.
I definitely miss the simplicity of Maestro. Appium takes more time to set up and comes with its own set of issues.
I still follow the project (and also the Flashlight.dev project, which uses Maestro for performance measurement), and I'm looking forward to the updates, which the team ships constantly.
If you are a startup with no QAs, I would definitely go the Maestro route, but avoid it for a complex app pipeline use case, at least for now.
Were you using Maestro Cloud or run it yourself? Could it be that if you use the cloud version some of the issues you've mentioned would be gone?
> "Appium takes more time to set up and comes with its own set of issues."
Hi, Appium project creator here. I'm working on something new (complementary to Appium) to address Appium set-up and other issues. If you ever wanted to chat, would love to hear more.
Installed and test-drove it...
Really like the fact that it's easy to start doing something useful. I may end up using it for some screen scraping too. Puppeteer is powerful, but the scripts tend to be brittle.
Keep up the good work!
The fact that it's open-source and nicely structured internally lets you "peel off" the topmost YAML layer and just use the underlying components to interact with the mobile device, using your JVM-compatible language of choice.
I recently made a lol-project at a hackathon exactly that way: https://devpost.com/software/hearthack
Awesome to hear! There's still tons we want to do on the Web side, so please let me know if there's anything you think should be added or improved there! Feel free to tag/DM me (@Leland) in our Slack community or email me leland@mobile.dev with questions/suggestions!
How does it compare to writing Appium tests?
There was a good thread on Reddit a while back on this topic. The post was simply a request for opinions on Appium, but virtually everyone ended up recommending Maestro instead: https://www.reddit.com/r/QualityAssurance/comments/1771ca7/o...
Honestly, I'd just give it a try yourself - you can get started in minutes
Appium and Selenium project creator here. Just saying hi.
Cheers, huge Huggins fan here :) Looking forward to Valet
so am i! (it's been a crazy month, but that's not very obvious from the outside.)
First of all, I didn't make this, let me be clear, and I don't work for the company Mobile.dev.
I've been looking for a replacement for Appium because its documentation is absolutely garbage. Maestro boils everything down to YAML and runs its own test server, so you don't have to worry about connecting to the device drivers. It's missing an API, but who needs an API when the CLI is so beautiful?
Does anyone know of anything on par with this that I should try? So far this has knocked my socks off.
Co-author of Maestro here - really appreciate that support jztan! If you get a chance you should also try out web support which we recently released! And always open to feedback, so please let me know if there's anything you think can be improved!
What was the most garbage thing in your opinion re: Appium documentation?
Hi Jztan, glad you're exploring this space! I'm the co-founder of MobileBoost, and I'd love to introduce our product, GPT Driver (https://www.mobileboost.io/).
We started two years ago with an AI-native approach, which is particularly useful for handling dynamic flows, hard-to-locate UI elements, and testing across multiple platforms and languages. Our main objective is to reduce test maintenance effort.
Duolingo recently shared their experience adopting our tooling: https://blog.duolingo.com/reduced-regression-testing/
We offer:
- a Web Studio: a no-setup-required platform with all tooling preconfigured.
- SDKs: direct integration with existing test suites (Appium, XCUI, Espresso).
Happy to answer any questions!
Can I run this locally?
Yes, you can use our SDKs to run it locally on Simulators, Emulators, and real devices. We also support popular third-party device farms via the WebDriver protocol.
Oh okay, this is cool. I thought it wasn't possible (it's not mentioned on your front page at all).
How do your AI features work when running tests locally using your SDK? Do I need to provide my own token to some LLM provider?
By default, the SDKs use our API endpoints, where we run a combination of models to maximize accuracy and reliability. This also enables us to provide logging with screenshots and reasoning to help with debugging.
That said, we're currently experimenting with a few customers who run our tooling against their own hosted models. While it's not publicly available yet, we might introduce that option going forward.
Would love to hear more about your use case: is a self-hosted setup relevant, or just the use of your own LLM tokens?
What is stopping adoption of such projects? The quality of most mobile apps sucks so not sure why mobile testing is not mainstream?
> What is stopping adoption of such projects?
In my little experience, many big companies are heavily invested in Appium, which was the only viable solution x years ago, and keep clinging on to that.
Also Maestro may not be flexible/hackable enough for some of the things they do with Appium. But in the long term I think everyone would benefit if Maestro became the go-to UI testing tool, the way Docker became the go-to tool for containerization.
> The quality of most mobile apps sucks so not sure why mobile testing is not mainstream?
I think it's actually: mobile testing is not mainstream, so most mobile apps suck, haha.
btw, I think that most mobile apps actually make no sense at all (many are just stupid CRUDs), and should be web apps. But that'd be a whole different rant :)
Maestro is great. However, it lacks many important features you might need.
For example, Maestro does not let you coordinate multiple flow tests together. One test case I had was one phone initiating a call and another answering it. Instead, Maestro prefers that every flow is self-contained, and will not run both in parallel reliably.
I found many such limitations in its design only after writing a whole lot of their custom flow syntax.
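Lacking first-class multi-device coordination, one workaround is to drive two Maestro processes in parallel from an external script. The sketch below assumes the CLI's `--device` option for targeting a specific device (verify the flag against your Maestro version); real synchronization between the two flows (caller dials, then callee answers) would still need a rendezvous mechanism, such as both flows polling a shared local webserver for state:

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

def run_flow_on_device(device_id, flow):
    """Run one Maestro flow against one device; returns the process exit code."""
    result = subprocess.run(
        ["maestro", "--device", device_id, "test", flow],
        capture_output=True,
    )
    return result.returncode

def run_in_parallel(jobs, runner=run_flow_on_device):
    """jobs: list of (device_id, flow) pairs; returns exit codes in job order."""
    with ThreadPoolExecutor(max_workers=len(jobs)) as pool:
        return list(pool.map(lambda job: runner(*job), jobs))
```

For example, `run_in_parallel([("emulator-5554", "caller.yaml"), ("emulator-5556", "callee.yaml")])` would launch both flows at once; the device IDs and flow names here are placeholders.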
Why Yaml ?
I've worked in this niche for a very long time; I seriously need to be able to use a normal programming language. A lot of test tools need to be part of a larger workflow. If this were good ole Node.js, I could use some other tricks, for example intercepting network requests, custom logic in JavaScript, etc.
We've found that YAML encourages maintainable testing practices. But when you need to fire off network requests or add custom logic, Maestro does have JavaScript support: https://docs.maestro.dev/advanced/javascript/run-javascript
Very sophisticated companies like DoorDash and Kraken have written and maintain hundreds of Maestro tests using this approach!
Awesome, thank you for replying to me!
However, is it possible to use your testing framework as a library inside a larger project?
worth mentioning it also supports web testing (in beta), so it's not exclusively for mobile testing!
https://docs.maestro.dev/platform-support/web-desktop-browse...
(disclosure: I currently work for mobile.dev, so if you have any feedback or questions, feel free to drop a reply and I'll try to answer)
Useful for running UI tests on mobile apps, and it has good reviews.
I gather this is unrelated to the old CA/Unison project called Maestro or maybe Tivoli Workload Scheduler?
Woohoo Maestro is awesome!
The newer UI testing tools like mobileboost and QA buddy all support using vision language models and natural language to make testing easier. Do you plan to add support for that?
https://www.uber.com/en-US/blog/generative-ai-for-high-quali...
100% - here are some of Maestro's AI-powered commands:
* https://docs.maestro.dev/api-reference/commands/assertwithai
* https://docs.maestro.dev/api-reference/commands/extracttextw...
Main difference is that Maestro takes a reliable-by-default approach. We hear plenty of stories of folks exploring tools like the ones you mentioned, then ultimately coming back to Maestro due to reliability / reproducibility issues, which are non-negotiable when it comes to end to end testing
Let us know how it goes! We've got a very active slack community if you run into any issues. Slack invite: https://docsend.com/view/3r2sf8fvvcjxvbtk