I think it depends on the situation.
Unit tests can be done by Claude Code. I use it every day.
For E2E testing, browser-based tools are already pretty convenient. AI could definitely help by suggesting UX improvements, but setting up a smooth workflow is still tricky. You’d need to figure out things like where to put the AI’s feedback, when it should kick off testing, and who’s going to sort through all the suggestions it generates.
But technically it can be useful, and the quality is good enough.
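To make the workflow question concrete, here is a minimal sketch using Playwright's Python sync API; review_ux is a hypothetical stand-in for whatever LLM call you'd wire in, and the target URL is just a placeholder:

    # A plain E2E check with one hypothetical hook where an LLM could
    # be asked for UX feedback on a screenshot.
    from playwright.sync_api import sync_playwright

    def review_ux(screenshot_path: str) -> str:
        # Hypothetical: send the screenshot to an LLM and return its suggestions.
        return "(LLM feedback would go here)"

    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto("https://example.com")    # placeholder for the app under test
        page.screenshot(path="home.png")

        # The deterministic assertion still decides pass/fail.
        assert "Example" in page.title()

        # AI feedback stays advisory and lands somewhere a human can triage later.
        with open("ux-feedback.txt", "a") as f:
            f.write(review_ux("home.png") + "\n")

        browser.close()

That keeps the three open questions separate: the feedback goes to a file, the trigger is the normal test run, and a human still does the triage.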
It works without AI, but there's an MCP server and such, so you should be able to connect Claude etc. to your emulator/device now.
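As a rough illustration (not any particular product's setup), a tiny MCP server built with the Python MCP SDK's FastMCP could expose emulator actions over adb as tools for Claude to call:

    # Sketch of an MCP server exposing two Android emulator actions via adb.
    # Assumes the `mcp` Python SDK is installed and `adb` is on PATH.
    import subprocess

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("emulator-tools")

    @mcp.tool()
    def tap(x: int, y: int) -> str:
        """Tap the screen at (x, y) on the connected emulator/device."""
        subprocess.run(["adb", "shell", "input", "tap", str(x), str(y)], check=True)
        return f"tapped ({x}, {y})"

    @mcp.tool()
    def screenshot(path: str = "screen.png") -> str:
        """Capture the current screen and save it to a local file."""
        with open(path, "wb") as f:
            subprocess.run(["adb", "exec-out", "screencap", "-p"], stdout=f, check=True)
        return path

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default; point your MCP client config at this script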
duxup 2 days ago [-]
I'm going to throw out my own ignorant theory.
AIs that I find useful are still just LLMs, and LLMs' power comes from having a massive amount of text to work with to string together word math and come up with something OK. That's a lot of data that comes together to get things ... kinda right ... sometimes.
I don't think there's that data set for "use an app" yet.
We've seen from "AI plays games" efforts that there have been some pretty spectacular failures. It seems like "use app" is a different problem.
cheevly 2 days ago [-]
LLMs have literally won Pokemon. I'm pretty sure that using an app is 10x simpler.
Vilian 2 days ago [-]
It's a lot simpler to run Pokemon than to test an app; the game plays by itself sometimes.
Nerd_Nest 16 hours ago [-]
I’m still torn on this. On one hand, memory could make ChatGPT more useful, especially for people using it regularly for work or coding. But on the other hand, the idea that it “remembers” me just feels a little uncomfortable.
I’d want more control over what’s remembered and when. Curious if anyone here has used this yet — is it actually helpful in practice?
logic_node 16 hours ago [-]
I’ve been trying it out recently, mostly for writing and summarizing research. The memory feels subtle so far — it doesn’t jump in unless you really build on past prompts.
That said, I totally agree about control. I wish there was a more obvious way to “pause” or “reset” memory mid-session instead of diving into settings. It’s useful, but still a little opaque.
drakonka 2 days ago [-]
They are; we're working on agents for web application testing over at qa.tech.
afrederico 2 days ago [-]
They should totally be able to. If there's "vibe coding," there should be "vibe testing." We're working on just such a product (https://actory.ai); right now it only does websites, but just imagine when we turn it on for mobile/apps, etc. How cool would that be?
aristofun 2 days ago [-]
Because for meaningful tests of an app (assuming B2C, or B2B for end users), you are supposed to be, or imitate, a human being.
Current AI is not even designed to do that. It is just a very sophisticated auto-complete.
It is sophisticated enough to fool some VCs into thinking you can chop your round peg into a square hole. But there are no grounds to expect a scalable solution.
gametorch 1 days ago [-]
Eh, I disagree. Lots of valuable open source code purely written by AI has already been shipped.
aristofun 1 days ago [-]
Give me one decent example of code "purely" written by AI.
Thousands of users. 40+ GitHub stars. The original draft took 30 minutes. I added numerous feature requests, and each took like 5 minutes a pop.
I never wrote a single line of that code.
Furthermore, my startup, https://gametorch.app/, has 110 sign-ups, paying users, and millions of impressions. I never wrote any of that code either. Typing it out at ~100 wpm would be far too slow.
HeyLaughingBoy 2 days ago [-]
Anecdotally, I know someone who tried to have ChatGPT generate unit tests and it was an abject failure.
cheevly 2 days ago [-]
I know someone that generated unit tests successfully.
whoknowsidont 2 days ago [-]
And I know exactly which one of these is an enterprise B2B app/platform.
haiku2077 1 days ago [-]
I generate tests with Claude almost every day.
owebmaster 1 days ago [-]
I have generated unit tests successfully; how did the someone you know fail?
gametorch 1 days ago [-]
I generated tons of valuable code with a bunch of GitHub stars, paying users, hundreds of signups, millions of impressions. Just chipping in my anecdote.
bravesoul2 19 hours ago [-]
That's a good idea. You're on to something.
v5v3 2 days ago [-]
Are LLM testers doing anything that traditional scripts with for loops can't?
postalrat 2 days ago [-]
LLM testers have for loops, so they can do everything traditional scripts with for loops can, plus more.
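For what it's worth, here's the kind of check a plain scripted for loop already covers (parse_price is a toy included only so the snippet runs on its own); the claim above is that an LLM tester can run loops like this and also pick inputs or walk UI flows nobody scripted:

    # A traditional scripted test: a fixed for loop over known cases.
    def parse_price(text: str) -> int:
        # Toy stand-in for a real function under test: "$1,234.56" -> 123456 cents.
        return int(round(float(text.replace("$", "").replace(",", "")) * 100))

    CASES = [
        ("$0.99", 99),
        ("$10.00", 1000),
        ("$1,234.56", 123456),
    ]

    def test_parse_price_cases():
        for text, cents in CASES:  # fixed inputs, fixed oracle
            assert parse_price(text) == cents

    if __name__ == "__main__":
        test_parse_price_cases()
        print("all scripted cases pass")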