- The cloudless architecture is intriguing. How do you handle tour synchronization when multiple devs are working on the same codebase?
- How do you handle tour updates when the underlying code changes? Auto-invalidation or manual refresh?
Hi! Thank you :) Great question, and it's one that's full of assumptions, so tell me where you think I'm wrong.
Tours are stored as a flat file (JSON). When you open a tour, you open it with the Agent, and on re-opening it goes through re-validation (as is done during the tour). Because the file includes start/end ranges and line text (verbatim), lines shifting in a file will invalidate the tour and the Agent will need to rebuild it. There are a lot of "dead code" cases where this won't catch a tour update, today.
The tour file contains a ton of meta information as well, so each time the Agent opens it, it could rebuild the tour with the same intents: the original user request, the synthesized request, the success criteria, and the concepts that are going to be taught.
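A minimal sketch of that re-validation check, assuming a hypothetical step shape (the real tour schema isn't shown here, so `file`, `start_line`, `end_line`, and `verbatim` are my own illustrative names):

```python
import os
import tempfile

def step_is_valid(step):
    """Re-validate one tour step: the recorded verbatim line text must
    still appear at the recorded start/end range; otherwise the tour is
    stale and the Agent would need to rebuild it."""
    try:
        with open(step["file"]) as f:
            lines = f.read().splitlines()
    except OSError:
        return False  # file moved or deleted
    actual = lines[step["start_line"] - 1:step["end_line"]]
    return actual == step["verbatim"]

# Demo: record a step against a small source file, then shift its lines
# to simulate an edit elsewhere in the file.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "auth.py")
    with open(path, "w") as f:
        f.write("def login(user):\n    return issue_token(user)\n")

    step = {
        "file": path,
        "start_line": 1,
        "end_line": 2,
        "verbatim": ["def login(user):", "    return issue_token(user)"],
    }
    ok_before = step_is_valid(step)  # text matches: tour still valid

    with open(path, "w") as f:  # insert a line above the recorded range
        f.write("import tokens\ndef login(user):\n"
                "    return issue_token(user)\n")
    ok_after = step_is_valid(step)  # lines shifted: tour invalidated
```

This is only a sketch of the verbatim-text check described above; it also shows why "dead code" cases slip through, since unchanged lines still validate even if the surrounding logic no longer calls them.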
This is a first-pass solution for now. If there's more interest, there's a much more interesting technical solution: managing pointers to references and invalidating the tour on a broader range of criteria.
Part of what I need to learn is:
- A. Are tours something that folks use as ongoing "interactive" documentation, e.g. a set of onboarding tours?
- B. Are tours ephemeral things that folks use to hash out how to approach a feature or bug, or to review a PR?
At the moment, it's more optimized for B, as that's how I've been using it, but A is a very cool use case: one could imagine opening a new repo and having N tours that you can just start up. I was considering reaching out to open source projects to create tour files and request a PR after they confirm interest (not spam them) to test A.
Your thoughts are welcome and appreciated. Have a lovely Friday.
I see value in backfilling missing / obtuse documentation; I also see potentially negative value if it's used instead of improving or reading existing docs. Ideally, Plotly would have a developer guide that is higher quality than what an LLM can derive.
That's really insightful. I think the goal is faster comprehension for the developer. Whether it's a PR in an area of code you haven't seen in a while and want a refresher on, a bug where you have no clue where to start, or onboarding -- in all cases, you can't start without a mental model of what's going on.
Who I'm solving for may be folks like me who would prefer to have the guide IN CODE. It's a new experience for me, as I've never used an in-IDE code tour.
I think what I'm experiencing is it's way more helpful for me to have the guide navigate through code so I can explore a little at each step and get a lay of the land.
How people learn is personal, and what I hoped for when I was building this was more of an experience where I'm walking through with a teammate.
I considered trying to have a voice AI read or explain each step. I also considered allowing another person to drive your IDE by mirroring what they're looking at. Both were cut at the idea phase because they felt like feature bloat on a concept that I didn't know anyone (but me) really wants.
Open to any more ideas or feedback! Thank you so much for dropping in
Gave this a spin and this is really cool! Wish I could provide more in-depth feedback to help improve it but will certainly be keeping an eye on it! Nicely done!
Hi, I just haven't tested with Copilot, but it should work, as I think it's using some flavor of GPT-4 or 5, both of which were able to use the new tools. I've tested more with GPT-5 through Cursor.
Given we're all here to learn, I'll share more details on the testing matrix:
IDEs: VS Code, Cursor, Windsurf, Cline, Roo
Agents: Claude Opus/Sonnet/Haiku, OpenAI GPT-5/GPT-5-codex-[low|med|hi], etc...
Themes: then when you hit the UI, if you use themes, there are at least four that are very common, including the default light/dark modes.
Huh wow, usually I’m pretty skeptical about stuff like this, but your video demo looks pretty neat, so I’m gonna try it on our codebase at work today and see how it goes! We’ve moved fast and broken stuff lately and struggled a bit to come up with coherent contributing guidelines, in addition to onboarding new devs and guiding LLM codegen - it’d be cool if a tool like this could help elucidate the key things you need to know to work within our bespoke framework.
Ah, shucks! Thank you. Means a lot to me to have folks giving it a go. I recognize how much our attention is strained these days.
I'm trying to figure out whether folks really want this as more of a documentation tool for onboarding, or for "on-the-fly" transient digging: bugs/root cause, PR review support, co-designing, or code-style review across a section. The inline feedback is what's been clutch and sticky for me, both within tours and outside.
If you install it and have feedback or issues, you can email me at support at v1 d0t co.
If it's more of an idea, feature request, or potential bug, you can ask your agent (once MCP is connected) to "Send feedback to intraview team" or something similar -- phrased so it's clear you don't want to add feedback for the Agent (which it can also do) -- and it'll send us a Slack message.
Thanks to the folks who upvoted, happy to answer questions or discuss any ideas!
You're also welcome to PM me on LinkedIn: https://linkedin.com/in/cyrusradfar
thank you!