Learnings from building AI agents (cubic.dev)
Oras 27 minutes ago [-]
The problem is that, regardless of how you try to use "micro-agents" as a marketing term, LLMs are instructed to return a result.

They will always try to come up with something.

The example provided was a poor one. The comment from the LLM was solid: why would you comment out a step in the pipeline instead of just deleting it? I would leave the same comment in a PR.

singron 2 hours ago [-]
I think they skipped over a non-obvious motivating example too fast. On first glance, commenting out your CI test suite would be very bad to sneak into a random PR, and that review note might be justified.

I could imagine the situation might actually be more nuanced (e.g. adding new tests and some of them are commented out), but there isn't enough context to really determine that, and even in that case, it can be worth asking about commented out code in case the author left it that way by accident.

Aren't there plenty of more obvious nitpicks to highlight? A great nitpick example would be one where the model will also ask to reverse the resolution. E.g.

    final var items = List.copyOf(...);
    <-- Consider using an explicit type for the variable.

    final List<Item> items = List.copyOf(...);
    <-- Consider using var to avoid redundant type name.
This is clearly aggravating since it will always make review comments.
willsmith72 1 hours ago [-]
Yep, completely agreed. How can that be the best example they chose to use?

If I reviewed that PR, I'd absolutely question why you're commenting that out. There had better be a very good reason, or even a link to a ticket with a clear deadline for when it can be cleaned up/reverted.

h1fra 2 hours ago [-]
What I saw using 5-6 tools like this:

- PR descriptions are never useful; they barely summarize the file changes

- 90% of comments are wrong or irrelevant, whether because they're missing context, missing tribal knowledge, missing code quality rules, or wrongly interpret the code change

- 5-10% of the time it actually spots something

Not entirely sure it's worth the noise.

bwfan123 2 hours ago [-]
Code reviews are not a good use case for LLMs. Here's why: LLMs shine in use cases where their output is not evaluated on accuracy - for example, recommendations, semantic search, sample snippets, images of people riding horses, etc. Code reviews require accuracy.

What would be a useful agent in the context of code reviews in a large codebase is a semantic-search agent that adds a comment linking related issues or PRs from the past, giving human reviewers more context. That output is a recommendation and is not rated on accuracy.
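A minimal sketch of what such a retrieval-only agent might look like - it never judges the diff, it only surfaces similar past work. The toy bag-of-words embedding and all names here are illustrative stand-ins, not any particular product's implementation:

    # Retrieval-only "related context" agent: link past PRs that look similar,
    # leave all judgement to the human reviewer.
    from collections import Counter
    from math import sqrt

    def embed(text: str) -> Counter:
        """Toy stand-in for a real embedding model: bag-of-words term counts."""
        return Counter(text.lower().split())

    def cosine(a: Counter, b: Counter) -> float:
        dot = sum(a[t] * b[t] for t in a)
        norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
        return dot / norm if norm else 0.0

    def related_context_comment(new_pr: dict, past_prs: list[dict], top_k: int = 3) -> str:
        """Rank past PRs by similarity to the new one and format an informational comment."""
        query = embed(new_pr["title"] + " " + new_pr["description"])
        ranked = sorted(
            past_prs,
            key=lambda pr: cosine(query, embed(pr["title"] + " " + pr["description"])),
            reverse=True,
        )
        lines = ["Possibly related past changes (context only, not a judgement):"]
        lines += [f"- #{pr['number']}: {pr['title']}" for pr in ranked[:top_k]]
        return "\n".join(lines)

    if __name__ == "__main__":
        past = [
            {"number": 101, "title": "Fix flaky retry logic in payment webhook", "description": "retry backoff webhook"},
            {"number": 152, "title": "Add CI cache for node modules", "description": "ci pipeline cache"},
        ]
        new = {"number": 200, "title": "Rework webhook retry backoff", "description": "payment webhook retry"}
        print(related_context_comment(new, past))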

asdev 34 minutes ago [-]
The code reviews can't be effective because the LLM doesn't have the tribal knowledge and product context of the change. It's just reading the code at face value.
kurtis_reed 2 hours ago [-]
There was a blog post from another AI code review tool: "How to Make LLMs Shut Up"

https://news.ycombinator.com/item?id=42451968

mattas 1 hours ago [-]
"After extensive trial-and-error..."

IMO, this is the difference between building deterministic software and non-deterministic software (like an AI agent). It often boils down to randomly making tweaks and evaluating the outcome of those tweaks.
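A minimal sketch of how those tweaks can at least be scored instead of eyeballed: replay each prompt version against a small hand-labeled set of diffs and track false positives. The agent stub, labels, and names are all hypothetical, not cubic's setup:

    # Tiny eval harness: measure false positives per prompt version on labeled diffs.
    def review_agent(diff: str, prompt_version: str) -> bool:
        """Stub for the real LLM call; returns True if the agent would leave a comment."""
        return "TODO" in diff or (prompt_version == "v1" and "var " in diff)

    LABELED_DIFFS = [
        {"diff": "+ var x = compute();", "should_comment": False},          # style nitpick we don't want
        {"diff": "+ // TODO remove before merge", "should_comment": True},  # real issue
    ]

    def false_positive_rate(prompt_version: str) -> float:
        fp = sum(
            1 for d in LABELED_DIFFS
            if review_agent(d["diff"], prompt_version) and not d["should_comment"]
        )
        negatives = sum(1 for d in LABELED_DIFFS if not d["should_comment"])
        return fp / negatives if negatives else 0.0

    if __name__ == "__main__":
        for version in ("v1", "v2"):
            print(version, false_positive_rate(version))  # v2 should nitpick less than v1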

s1mplicissimus 1 hours ago [-]
Afaik alchemists had a more reliable method than ... whatever this state of affairs is ^^
snapcaster 60 minutes ago [-]
You're saying alchemy is better than the scientific method?
AndrewKemendo 1 hours ago [-]
Otherwise known as science

1: Observation, 2: Hypothesis, 3: Test, 4: GOTO 1

This is everything ever built, ever.

What is the problem exactly?

nico 1 hours ago [-]
> 2.3 Specialized Micro-Agents Over Generalized Rules
> Initially, our instinct was to continuously add more rules into a single large prompt to handle edge cases

This has been my experience as well. However, it seems like platforms such as Cursor/Lovable/v0/et al. are doing things differently.

For example, this is Lovable’s leaked system prompt, 1550 lines: https://github.com/x1xhlol/system-prompts-and-models-of-ai-t...

Is there a trick to making gigantic system prompts work well?
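For reference, a rough sketch of the micro-agent split the article describes, as opposed to one giant prompt. The agent names, prompts, and stubbed LLM call here are assumptions for illustration, not cubic's or Lovable's actual code:

    # Several narrow, single-purpose reviewers instead of one 1,500-line prompt.
    from dataclasses import dataclass

    def call_llm(system_prompt: str, diff: str) -> list[str]:
        """Stub for a real model call; returns a list of review comments."""
        return []

    @dataclass
    class MicroAgent:
        name: str
        system_prompt: str  # short, focused instructions instead of one huge prompt

        def review(self, diff: str) -> list[str]:
            return [f"[{self.name}] {c}" for c in call_llm(self.system_prompt, diff)]

    AGENTS = [
        MicroAgent("security", "Only flag injection, auth, and secrets issues. Say nothing else."),
        MicroAgent("tests", "Only flag disabled, commented-out, or missing tests."),
        MicroAgent("naming", "Only flag misleading identifiers. Ignore style preferences."),
    ]

    def review_pull_request(diff: str) -> list[str]:
        """Run each narrow agent independently and merge their findings."""
        comments: list[str] = []
        for agent in AGENTS:
            comments.extend(agent.review(diff))
        return comments

    if __name__ == "__main__":
        print(review_pull_request("+ # ci test step temporarily disabled"))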

shenberg 1 hours ago [-]
When I read "51% fewer false positives" followed immediately by "Median comments per pull request cut by half" it makes me wonder how many true positives they find. That's maybe unfair as my reference is automated tooling in the security world, where the true-positive/false-positive ratio is so bad that a 50% reduction in false positives is a drop in the bucket
jangletown 2 hours ago [-]
"51% fewer false positives", how were you measuring? is this an internal or benchmarking dataset?
nzach 2 hours ago [-]
I agree with the sentiment of this post. In my personal experience, the usefulness of an LLM is positively correlated with your ability to constrain the problem it should solve.

Prompts like 'Update this regex to match this new pattern' generally give better results than 'Fix this routing error in my server'.

Although this pattern seems true empirically, I've never seen any hard data to confirm this property. This post is interesting, but it seems like a missed opportunity to back the idea with some numbers.

vinnymac 2 hours ago [-]
I’ve been testing this for the last few months, and it is now much quieter than before, and even more useful.
N_Lens 2 hours ago [-]
Very vague post, light on details, and, as usual, it feels more like a marketing pitch for the website.
weego 2 hours ago [-]
It's recreating the monolith vs micro-service argument by proxy for a new generation to plan conference talks around.
flippyhead 2 hours ago [-]
I found it useful.
bumbledraven 2 hours ago [-]
What model were they using?
curiousgal 2 hours ago [-]
> Encouraged structured thinking by forcing the AI to justify its findings first, significantly reducing arbitrary conclusions.

Ah yes, because we know very well that the current generation of AI models reasons and draws conclusions based on logic and understanding... This is the true face palm.
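For context, the quoted technique can be read mechanically: ask for a reasoning field before the verdict and drop findings whose justification is empty or flimsy. A minimal sketch, with the schema, threshold, and stubbed model call all being assumptions rather than the article's implementation:

    # "Justify first": keep only findings that come with substantive reasoning.
    import json

    def call_llm(prompt: str) -> str:
        """Stub for the real model call; expected to return JSON text."""
        return json.dumps([
            {"reasoning": "The CI test step was commented out, so failures no longer block merges.",
             "comment": "Was disabling this CI step intentional?"},
            {"reasoning": "", "comment": "Consider renaming this variable."},  # unjustified, gets dropped
        ])

    REVIEW_PROMPT = (
        "For each potential issue, first write `reasoning` citing concrete evidence "
        "from the diff, then the `comment`. Output JSON."
    )

    def justified_findings(diff: str, min_reasoning_chars: int = 40) -> list[dict]:
        findings = json.loads(call_llm(REVIEW_PROMPT + "\n" + diff))
        # Keep only findings whose justification carries real content.
        return [f for f in findings if len(f.get("reasoning", "").strip()) >= min_reasoning_chars]

    if __name__ == "__main__":
        for f in justified_findings("+ # - run: npm test"):
            print(f["comment"])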

nico 1 hours ago [-]
Humans work pretty much the same way

Several studies have shown that we first make the decision and then we reason about it to justify it

In that sense, we are not much more rational than an LLM

mosura 2 hours ago [-]
Lessons.
criddell 1 hours ago [-]
I don't like the word learnings either, but you write for your audience and this article was probably written with the hope that it would be shared on LinkedIn.

Learnings might be the right choice here.

I wouldn't complain if the HN headline mutator were to replace "Learnings" with "lessons".

flippyhead 2 hours ago [-]
This is LITERALLY mind blowing.