I really liked the README, that was a good use of AI.
If you're interested in the idea of writing a database, I recommend you check out https://github.com/thomasjungblut/go-sstables which includes sstables, a skiplist, a recordio format, and other database building blocks like a write-ahead log.
Also https://github.com/BurntSushi/fst which has a great blog post explaining its compression (and it's been ported to Go), which is really helpful for autocomplete/typeahead when recommending searches to users or doing spelling correction for search inputs.
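To illustrate the kind of prefix query an FST serves for autocomplete: here's a minimal stand-in using a plain sorted slice and binary search (not an actual FST — an FST answers the same query while compressing shared prefixes and suffixes; the term list and function name are made up for the example):

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// prefixMatches returns all terms starting with prefix, via binary search
// over a sorted term list. An FST supports the same lookup but stores the
// dictionary far more compactly by sharing common prefixes and suffixes.
func prefixMatches(terms []string, prefix string) []string {
	// Find the first term >= prefix, then scan while the prefix holds.
	i := sort.SearchStrings(terms, prefix)
	var out []string
	for ; i < len(terms) && strings.HasPrefix(terms[i], prefix); i++ {
		out = append(out, terms[i])
	}
	return out
}

func main() {
	terms := []string{"search", "searches", "searching", "seed", "select"}
	fmt.Println(prefixMatches(terms, "search")) // [search searches searching]
}
```

The same scan-from-lower-bound shape is what a typeahead box runs on every keystroke; the FST just makes the dictionary cheap enough to keep in memory.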
>>I wrote a full text search engine in Go
>I really liked the README, that was a good use of AI.
Human intelligences, please start saying:
(A)I wrote a $something in $language.
Give credit where it's due. AIs have feelings too.
> AIs have feelings too
Ohh boi, that’s exactly how the movie "Her" started! XD
tysm, i love this, FST is vv cool
Did you vibe code this? A few things here and there are a bit of a giveaway imho.
On my way to make a Dexter meme on this
When you think OP vibe-coded the project but can’t prove it yet
https://x.com/FG_Artist/status/1974267168855392371
I put the Overview section from the README into an AI content detector and it says 92% AI. Some comment blocks inside the codebase are rated as 100% AI generated.
> comment blocks inside codebase
Is vibe-commented a thing yet? :D
Wanted to give fellow readers a good on-ramp for understanding the FTS internals. Figured leaning into readability wouldn’t hurt
For me this makes the structure super easy to grok at a glance
https://github.com/wizenheimer/blaze/blob/27d6f9b3cd228f5865...
That said, totally fair read on the comments. Curious if they helped/landed the way I intended, or if a multi-part blog series would've worked better :)
Claude: "You're absolutely right" :D
Another possible tell (not saying this is vibe coded) is when every function is documented, almost too many comments
Ohh, I thought that inline comments would make it grokkable and be a low-friction way in. Seems this didn't land the way I intended :'(
Would a multi-part blog have been better?
I like it, I comment exactly like you do. Comments are free, storage is plentiful, why not add comments everywhere?!
What makes you think so?
Probably the commit history.
Yayiee, the "can't prove it" Doakes Dexter meme, making it to HN
Would love to hear how this compares to another popular go based full text search engine (with a not too dissimilar name) https://github.com/blevesearch/bleve?
Bleve is an absolute beast! Built with <3 at Couchbase. Fun fact: the folks who maintain it sit right across from me at work
Shameless plug, you may wish to do Lucene-style tokenizing using the Unicode standard: https://github.com/clipperhouse/uax29/tree/master/words
Got to admit, initial impressions: this is pretty neat, I'd spend some time with this. Thanks for the link :)
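For anyone curious what UAX #29 word segmentation buys you over the obvious approach: here's the naive baseline it improves on, using only the Go standard library (a rough sketch, not the linked uax29 package):

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// naiveTokens splits on any rune that is not a letter or digit.
// UAX #29 word segmentation is smarter: it keeps contractions like
// "don't" together and handles many scripts correctly; this is the
// crude baseline a Unicode-aware segmenter fixes.
func naiveTokens(s string) []string {
	return strings.FieldsFunc(s, func(r rune) bool {
		return !unicode.IsLetter(r) && !unicode.IsNumber(r)
	})
}

func main() {
	fmt.Println(naiveTokens("Don't panic, it's UAX #29!"))
	// naive result: [Don t panic it s UAX 29]
	// a UAX #29 segmenter would keep "Don't" and "it's" intact
}
```

Tokenization choices like this ripple through the whole index: whatever the tokenizer splits at index time is all a phrase or term query can ever match later.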
This is very cool! Your readme is interesting and well written - I didn't know I could be so interested in the internals of a full text search engine :)
What was the motivation to kick this project off? Learning or are you using it somehow?
I’m learning the internals of FTS engines while building a vector database from scratch. Needed a solid FTS index, so I built one myself :)
It ended up being a clean, reusable component, so I decided to carve it out into a standalone project
The README is mostly notes from my Notion pages, glad you found it interesting!
What are you building a vector database from scratch for?
Mostly wanted a refresher on GPU accelerated indexes and Vector DB internals. And maybe along the way, build an easy on-ramp for folks who want to understand how these work under the hood
Great work! Would be interesting to see how it compares to Lucene performance-wise, e.g. with a benchmark like https://github.com/quickwit-oss/search-benchmark-game
Thanks! Honestly, given it was hacked together in a weekend, I'm not sure it'd measure up to Lucene/Bleve in any serious way.
I intended this to be an easy on-ramp for folks who want to get a feel for how FTS engines work under the hood :)
Not _that_ long ago Bleve was also hacked together over a few weekends.
I appreciate the technical depth of the readme, but I’m not sure it fits your easy on-ramp framing.
Keep going and keep sharing.
Cool project!
I see you are using a positional index rather than doing bi-word matching to support positional queries.
Positional indexes can be a lot larger than non-positional. What is the ratio of the size of all documents to the size of the positional inverted index?
Your observation is spot on. Bi-word matching would definitely ease this. Stealing it for a future iteration, tysm :D
Well, bi-word matching requires that you still have all of the documents stored, to verify the full phrase occurs in the document rather than just the bi-words. So it isn't always better.
For example the phrase query "United States of America" doesn't occur in the document "The United States is named after states of the North American continent. The capital of America is Washington DC". But "United States", "states of" and "of America" all appear in it.
There's a tradeoff because we still have to fetch the full document text (or some positional structure) for the filtered-down candidate documents containing all of the bi-word pairs, so it requires a second stage of disk I/O. But as I understand it, most practitioners assume you can get away with fewer IOPS than with a positional index, since that info only has to be fetched for a much smaller filtered-down candidate set rather than for the whole posting list.
But that's why I was curious about the storage ratio of your positional index.
Looks great! Would love to see a benchmark against Bleve and a lightweight vector implementation.
tysm, would try pairing it with HNSW and IVF, halfway through :)
Why did you create this new account if there's already 3 existing accounts promoting your stuff and only your stuff?
Because running a three-account botnet farm is fun :D Okay, jk, please don't mod me out.
One’s for browsing HN at work, the other’s for home, and the third one has a username I'm not too fond of.
I’ll stick to this one :) I might have some karma on the older ones, but honestly, HN is just as fun from everywhere