I can indeed see some UI patterns that are indicative of vibe coding. So what?
Did something like this exist before, with the same level of interactivity?
I certainly had not come across it. Should Show HN be exclusive to hand-crafted code that demonstrates software mastery? Where things are going, that would be a slippery slope.
I think we should be celebrating what is possible using this new generation of tools, and how the reduced barrier to entry will result in more creativity and experimentation. As for those who are asking for AI use disclosure, why stop there? Why not also ask for disclosure of the use of any libraries or templates that made implementing it a bit easier?
I would also like to add a personal perspective. Each academic and teacher has their own take on things, their own narrative which distinguishes them from the rest. And in most cases, this unique perspective has so far been expressed through a combination of spoken words, handouts, and slides.
Yet, when it came to interactive demonstrations and digital tools, we were at the mercy of wildly overpriced SaaS products, or dependent on TAs to implement some version of our vision. The homebrewed teaching aid that conveyed concepts exactly the way we wanted was simply out of reach, unless we were prepared to dedicate months of work, at the expense of other commitments.
You’re kind of talking to a straw man here. I didn’t read anything against the use of AI, just that they’d rather not spend time reviewing AI code. It’s reasonable not to want to spend time reviewing something the author hasn’t spent much time on themselves. Maybe the preference would be to review AI code with AI, but we need the disclosure in order to make that choice.
Every Show HN should come with an AI disclosure detailing exactly how much AI was used to create it. It's not that using AI is bad per se, but I don't want to be a human critiquing an AI's work, it's hard to respond if I don't know who/what built it.
We're considering how to improve the way Show HNs are evaluated and presented.
Lately we've had to think deeply about exactly what has changed about Show HNs in the era of AI-generated code, and one way of thinking about it is that code-generation has basically eaten everything that used to be interesting about most Show HN posts. I.e.: What were the obstacles to making it work? What approaches did you try that didn't work? What was the breakthrough that made it work? What did you learn?
So, we need a new way of evaluating the ways in which a project may be interesting to the HN audience, and of how project creators convey that in their posts. It will take time for new conventions to emerge, but we're doing what we can to help find them.
For now, please don't post comments like this. It arguably counts as snark, a swipe, curmudgeonliness, a generic tangent, or other breaches of the guidelines: https://news.ycombinator.com/newsguidelines.html.
If you think something is unfit for HN, please email us (hn@ycombinator.com) and we'll take a look.
It’s funny… my initial reaction to your comment was that it’s a bit persnickety to expect that. However, I’m coming around to agreeing. I recently spent a non-trivial amount of time responding to a PR into one of my projects. I did have a sense it was mostly AI, but the changes were reasonable with a bit of adjustment. Wrote some feedback and guidance for the first time contributor and bam, they closed the PR, haven’t heard back.
But yes, people generally do not review and comment on compiled code. If your source is written by AI, why is it a surprise people might be hesitant to spend their time reviewing what it produced?
You had to change the end because following it through actually made total sense. You kinda pulled a trick there, no doubt to convince yourself, if I’m being fair to you.
This is no longer the case.
https://railsback.org/PT.html#Popups
I mean it's not that using compilers is bad, it's just that those who use them aren't real coders.