Show HN: Open-sourcing our text-to-CAD app

github.com

136 points by zachdive 16 hours ago

Hey HN! I'm Zach from Adam (https://adam.new/). We’re building an AI co-pilot for mechanical CAD software.

As part of our broader research, we built a browser-based Text-to-CAD app (https://news.ycombinator.com/item?id=44182206) and are now open sourcing it. This is a React SPA with a Supabase backend.

What it does:

* Generates parametric 3D models from natural language descriptions, with support for both text prompts and image references

* Outputs OpenSCAD code with automatically extracted parameters that surface as interactive sliders for instant dimension tweaking (a sketch of the extraction idea follows this list)

* Exports as .STL or .SCAD
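
To make the slider idea concrete, here is a minimal TypeScript sketch (illustrative only, not the repo's actual code; the range-comment annotation format and the helper names are assumptions) of how OpenSCAD Customizer-style assignments such as height = 40; // [10:100] could be extracted as slider definitions and rewritten in place when a slider moves:

    // Sketch only: pull out top-level OpenSCAD assignments annotated with a
    // Customizer-style range comment, e.g. `height = 40; // [10:100]`.
    interface SliderParam { name: string; value: number; min: number; max: number }

    const PARAM_RE = /^(\w+)\s*=\s*(-?\d+(?:\.\d+)?)\s*;\s*\/\/\s*\[(-?\d+):(-?\d+)\]/gm;

    export function extractParams(scad: string): SliderParam[] {
      return [...scad.matchAll(PARAM_RE)].map(m => ({
        name: m[1], value: Number(m[2]), min: Number(m[3]), max: Number(m[4]),
      }));
    }

    // Deterministic update: rewrite a single assignment in place so a simple
    // dimension tweak needs no model call at all.
    export function setParam(scad: string, name: string, value: number): string {
      const re = new RegExp(`^(${name}\\s*=\\s*)-?\\d+(?:\\.\\d+)?`, 'm');
      return scad.replace(re, (_match, prefix) => `${prefix}${value}`);
    }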

Under the hood:

* Separate agents for conversation and code generation; simple parameter tweaks bypass AI entirely using deterministic regex-based updates

* Runs fully in-browser by compiling OpenSCAD to WebAssembly and integrating Three.js with React Three Fiber for 3D rendering (see the rendering sketch after this list)

* Supports the BOSL, BOSL2, and MCAD libraries, plus a custom font (Geist) for text in models
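
As a rough illustration of that in-browser flow (a sketch only: compileScadToStlUrl is a hypothetical stand-in for the repo's WASM wrapper, and the three.js import path may vary by version), the generated OpenSCAD is compiled to an STL in the browser and then handed to React Three Fiber:

    import { Suspense, useEffect, useState } from 'react';
    import { Canvas, useLoader } from '@react-three/fiber';
    import { STLLoader } from 'three/examples/jsm/loaders/STLLoader.js';

    // Hypothetical wrapper around an OpenSCAD WebAssembly build: compiles SCAD
    // source to STL in the browser and resolves to an object URL for the result.
    declare function compileScadToStlUrl(scadSource: string): Promise<string>;

    function StlMesh({ url }: { url: string }) {
      const geometry = useLoader(STLLoader, url); // parses the STL into a BufferGeometry
      return (
        <mesh geometry={geometry}>
          <meshStandardMaterial color="#8899aa" />
        </mesh>
      );
    }

    export function Viewer({ scadSource }: { scadSource: string }) {
      const [stlUrl, setStlUrl] = useState<string | null>(null);

      // Recompile whenever the generated OpenSCAD source changes.
      useEffect(() => {
        let cancelled = false;
        compileScadToStlUrl(scadSource).then(url => { if (!cancelled) setStlUrl(url); });
        return () => { cancelled = true; };
      }, [scadSource]);

      return (
        <Canvas camera={{ position: [60, 60, 60] }}>
          <ambientLight intensity={0.6} />
          <directionalLight position={[50, 100, 50]} />
          <Suspense fallback={null}>
            {stlUrl && <StlMesh url={stlUrl} />}
          </Suspense>
        </Canvas>
      );
    }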

We’ve seen many developers trying to replicate this kind of functionality, so we’re releasing it to give the community a solid foundation to build on.

Future improvements:

* Expand geometry support - Move beyond CSG primitives to support curved surfaces, fillets, lofts, and constraint-driven modeling through CadQuery/Build123D

* Better spatial context - UI for face/edge selection and viewport image integration to give LLMs spatial understanding

* Enhanced capabilities - RAG on documentation and integration with more OpenSCAD libraries for features like proper threading

You can clone the repo and run it locally! Contributions are welcome, and we’ll keep merging PRs as they come in.

alnwlsn 4 hours ago

I continue to be skeptical of text to CAD, because to make it work, you're going to be doing a whole lot of this [0]. This is an extremely high bar to clear to be better than even the most basic of CAD skills.

I'd wager that for most of the CAD I work on, I would not be able to accurately describe what I want in natural language. If you've been able to, please share examples!

[0] - https://www.nasa.gov/history/afj/ap13fj/15day4-mailbox.html

tarasglek 2 hours ago

It would be killer to be able to integrate this with 3D scans. E.g. "make me a mount that hugs this shape", where you can both render the scan and mark it up with a 2D paint tool.

mclau157 14 hours ago

Readme could really use some pictures

bilsbie 13 hours ago

Cool project. AI is surprisingly good with OpenSCAD. I wonder what a custom-trained model could do?

  • gerdesj 11 hours ago

    I've had mixed results with chatbots and OpenSCAD. For my first effort it used BOSL (good skills) but hallucinated functions (bad skills)! It also failed to allow for closing the geometry properly, i.e. it didn't add the tiny epsilon needed to get unions, diffs, etc. to join up.

    The model was really simple - a threaded "back nut" - basically a hollow, thin-walled cylinder with a base that has a hole in it. The cylinder is threaded on the inside. It's a plumbing part for a long-out-of-production system that still works fine, but it's leaking and I broke the current nut trying to tighten it. Once I disassembled the joint, it turned out it does not need to be tight, just stable. It only serves to hold a tube with two O-rings in place inside the water inlet to the device, with a standard plumbing nut and olive job on the other end of the short tube. A perfect job for 3D printing. It took me six iterations to get the thread right. At one point I misread my calipers, sigh.

    I'd love to see what RAG will do for this with a well-focused model. There is a lot of decent documentation for OpenSCAD and a lot of literature on this form of modelling.

  • zachdive 11 hours ago

    Yes, we did a bunch of experiments on this! We could get open-source models up to, but not beyond, the level of the best closed foundation models. Gemini 2.5 and Claude 4 have been the most reliable as API options.

jstanley 14 hours ago

> CADAM uses ngrok to send image URLs to Anthropic

FYI, you can send base64-encoded PNGs; no need to mess about with ngrok.
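
Something along these lines with the official @anthropic-ai/sdk (a Node-side sketch; the model id and file name are just placeholders):

    import fs from 'node:fs';
    import Anthropic from '@anthropic-ai/sdk';

    const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

    // Read a local render and send it inline as a base64 image content block.
    const imageBase64 = fs.readFileSync('render.png').toString('base64');

    const response = await client.messages.create({
      model: 'claude-sonnet-4-20250514', // placeholder; use whatever model the app targets
      max_tokens: 1024,
      messages: [{
        role: 'user',
        content: [
          { type: 'image', source: { type: 'base64', media_type: 'image/png', data: imageBase64 } },
          { type: 'text', text: 'Describe this part so it can be rebuilt in OpenSCAD.' },
        ],
      }],
    });

    console.log(response.content);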

  • adenta 14 hours ago

    Yeah, the idea sounds magic, but there are a lot of weird things going on in this application.

    I feel like a standalone script would be much more helpful

  • simlevesque 9 hours ago

    That'll use way more tokens.

    • jacobr1 8 hours ago

      No, the tokens are evaluated on the image itself, once it's downloaded and resized. The same image sent as a base64-encoded payload and as a URL will cost the same. Now, if you are sending the content to the model as a text payload, then that would cost text tokens, but that doesn't work very well for non-trivial images.

renecito 11 hours ago

Do you have examples or a getting-started guide so I know what to expect?

  • zachdive 11 hours ago

    What sorts of examples would be helpful? We'll add screenshots to the readme, plus anything else you'd like.

    You can also check out the hosted version to see what to expect.

    • yardshop 6 hours ago

      It would be helpful to have some examples that show the prompts needed to develop simple shapes, then how to iterate to add improvements. A video of you using it to create something specific would be great.

      I first tried "a work table with a roof" which gave me a reasonable model but with a flat roof, then I tried "a work table with a pitched roof" which gave me a very unlikely and unworkable model with the halves of the roof disconnected and not contacting the vertical supports. Then I tried the "Adam Pro" option and it came out looking more like an Adirondack chair than a table, but not one you could sit in! =)

      I would like to know what to write instead to get a more useful model. Very cool project though!

bzmrgonz 7 hours ago

Have you seen what Google's Nano Banana can do?