by refulgentis on 10/30/2024, 6:36:42 PM
Feedback:
1. The description* reeks of AI slop; it extends a surface-level prompt into longer surface-level insights. (*description as in the GitHub README)
2. #1 creates a situation where I read through this long thing and realize it has no answers to even the first-level questions that would be on anyone's mind (what model? where is it run?). For this to become something I'll take the time to integrate into my core workflow and try, it has to be *much* more transparent.
3. Claims in the description are ~impossible.
3b. Up front, I feel your pain; there's a hard set of constraints to navigate here, given A) marketing needs to be concise, B) people play fast and loose with conciseness vs. accuracy, and C) you need to sound as good as the people in B.
3c. That being said, we're crossing into year 3 of post-ChatGPT. People, especially in your target audience, will know when they're reading that you're reframing "I give text to the LLM, which can theoretically do $X" into features, and users expect features to be designed and intentional. If they are, you should definitely highlight that to differentiate from people who just throw it into the LLM.
3d. "Analyzes your entire repository context": literally impossible, unless you're feeding it to Gemini only. I have about 20 KLOC and it's multiples of Llama's context size (see the back-of-envelope sketch after this list).
3e. "Understands code relationships and dependencies" see 3c.
3f. "Contextual Analysis: Reviews code changes within the full repository context": see 3d.
3g. "Language Agnostic: Supports all major programming languages.": see 3c (is there actual work done to do this, or is this just "well, given I just send the text to the LLM, everything is supported"?)
4. nit: Should be "Show HN: LlamaPReview, AI Github PR Reviewer That Learns Your Codebase"
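To put numbers on 3d, a rough back-of-envelope (the ~10 tokens per line figure is a loose assumption, and the windows are the published figures as of late 2024):

    # Does a 20 KLOC repo fit in a model's context window?
    lines_of_code = 20_000
    tokens_per_line = 10  # loose heuristic; varies by language and style
    repo_tokens = lines_of_code * tokens_per_line  # ~200,000 tokens

    context_windows = {  # approximate published sizes, late 2024
        "Llama 3.1": 128_000,
        "GPT-4o": 128_000,
        "Gemini 1.5 Pro": 2_000_000,
    }

    for name, window in context_windows.items():
        verdict = "fits" if repo_tokens <= window else "does NOT fit"
        print(f"{name}: {repo_tokens:,} tokens vs {window:,} -> {verdict}")

Only a Gemini-class window holds the whole repo, which is the point of 3d.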
by Zondartul on 10/30/2024, 7:13:46 PM
By "learns" do you mean "just shove the entire codebase into the context window", or does actual training-on-my-data take place?
by sksxihve on 10/30/2024, 7:30:13 PM
Are people really willing to commit code that was only reviewed by an AI? I personally wouldn't trust that for anything that is customer/revenue impacting. Obvious bugs and defects aren't all that hard to catch in normal code reviews, but subtle race conditions/deadlocks/memory errors can be very tricky. Do you have examples showing it can catch those?
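For instance, the kind of check-then-act race I mean reads fine line by line (an illustrative toy in Python, not anything from this product):

    import threading

    balance = 100

    def withdraw(amount):
        global balance
        if balance >= amount:  # check...
            balance -= amount  # ...then act: another thread can interleave here

    threads = [threading.Thread(target=withdraw, args=(100,)) for _ in range(2)]
    for t in threads: t.start()
    for t in threads: t.join()

    print(balance)  # usually 0, but -100 if both threads pass the check first

Every line is individually correct; the bug lives in the interleaving, which is exactly what a surface-level review misses.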
by nikolayasdf123 on 10/30/2024, 6:32:30 PM
From your Privacy Policy, you're straight up collecting users' code. Do you send it to someone else as well?
Might make sense for open source; closed source is a no-go for this.
by GavCo on 10/30/2024, 7:47:06 PM
This reminds me of the PR Agent open source tool: https://github.com/Codium-ai/pr-agent
I've found the code walkthroughs very useful
by Squeeze2664 on 10/30/2024, 6:21:39 PM
A name like llama-pr-review might help with searching for this thing. Preview being an actual word and all.
by agilob on 10/30/2024, 7:24:55 PM
Description says:
> Unlimited AI-powered PR reviews
FAQ says:
> A: Yes, we currently offer a free tier with usage limits. You can install and use LlamaPReview without binding any payment method.
Only "free tier" is available.
by lukasb on 10/30/2024, 8:02:48 PM
I have a simple script I run before merging into the main branch that just tells Claude to look for obvious bugs, and to err on the side of saying it looks fine. It has stopped me from merging two or three bugs; 95% of the time it says things look fine, so it hasn't wasted my time.
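The gist of it, as a sketch (assuming the Anthropic Python SDK with ANTHROPIC_API_KEY set; the model name and prompt here are illustrative, not my exact script):

    import subprocess
    import anthropic

    # Collect the changes that would land on main.
    diff = subprocess.run(
        ["git", "diff", "main...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the env
    message = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "Review this diff for obvious bugs only. "
                       "Err on the side of saying it looks fine.\n\n" + diff,
        }],
    )
    print(message.content[0].text)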
by skybrian on 10/30/2024, 6:52:44 PM
I’m wondering if code review is the right place to give advice; the process seems designed for human reviewers (where there is latency), and pair programming might be a better metaphor for what AI should be doing. Earlier feedback is often better.
We sort of have that with errors and warnings, where an IDE’s UI collects them into a todo list. The trouble is, the list isn’t necessarily prioritized very well.
On the other hand, asking for a review whenever you like is easy to control, versus being interrupted.
With all the AI tools floating around, it seems like user testimonials are going to be important for learning what’s worth trying out.
by coef2 on 10/30/2024, 6:47:53 PM
I have a conundrum about this. If an LLM can learn our codebase and generate reasonable reviews, does this imply it could perform the work independently without us? Perhaps generating code and conducting code reviews are distinct tasks. Another related question is: for complex tasks that generative AI can't solve, could this service still provide somewhat meaningful reviews? Maybe it could be partially useful for certain subtasks like catching off-by-one errors.
by smcleod on 10/30/2024, 7:38:45 PM
Hello. A few questions:
- Where is the source code? This is critical for it to be inspected before adding it to any repos.
- What models are you using?
- Where are the models running?
- When you say it learns from your codebase, is it building a RAG or similar database (roughly as sketched below), or are you fine-tuning on other people's code?
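For reference, the RAG interpretation would look roughly like this (a sketch using sentence-transformers; the chunk contents are made up, and this is purely illustrative of the approach, not a claim about the product's implementation):

    import numpy as np
    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")  # small local embedding model

    # "Learning" the codebase: embed chunks once and keep them as an index.
    chunks = [
        "def parse_config(path): ...",  # in practice, real function/file chunks
        "class RetryPolicy: ...",
    ]
    index = model.encode(chunks)  # shape: (num_chunks, dim)

    # At review time: embed the diff and retrieve the nearest chunks.
    diff_text = "def parse_config(path, strict=True): ..."
    query = model.encode([diff_text])[0]
    scores = index @ query / (np.linalg.norm(index, axis=1) * np.linalg.norm(query))
    top = scores.argsort()[::-1][:2]
    retrieved = "\n".join(chunks[i] for i in top)  # prepended to the review prompt
    print(retrieved)

Fine-tuning, by contrast, would bake the code into model weights, which has very different privacy implications. Hence the question.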
by jraph on 10/30/2024, 6:10:03 PM
(Show HN)
by rplnt on 10/30/2024, 8:43:38 PM
Any examples of actual PRs in public repos?
by madduci on 10/31/2024, 9:01:55 AM
Where can I see the code for this?
by iosguyryan on 11/1/2024, 5:49:54 AM
Make it local and slow.
by mistrial9 on 10/30/2024, 6:01:11 PM
oh right - a one-way relationship with a corporate-or-worse software process that makes a record of all progress, with timestamps and topics... what could go wrong?
Where's the AI running? Where are you sending the code? Are you keeping some of it?
I hate to be the compliance guy, but even from a startup perspective you'd at least want to mention what you promise to do here.