>accidentally hit unfollow on j
>instantly disappears from my feed completely
>pleroma search is useless, returns nothing when I search for @j
>try to hit remote follow on bae st, gives error
the fediverse user experience
@Joel
in reply to Den Datafag Trollmann :flag:

Also the vast majority of admins not having enough RAM to store the search index, and not knowing they need it because they've never dealt with actual database load before.
in reply to Haelwenn /элвэн/ :triskell:

@lanodan @hj yeah but even then there could be improvement. Like listing accounts you're following first. Grabbing the list of accounts you follow for autocomplete after typing @ would cover 85+% of mentions, take load off the database, and return results much faster.
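A rough sketch of the ordering being suggested, in Elixir but not tied to anything in Pleroma; the candidate list and the set of followed nicknames are hypothetical inputs:

defmodule AutocompleteRankSketch do
  # Given candidate accounts and the nicknames the user follows, put the
  # followed accounts first and truncate to what the dropdown will show.
  def rank(candidates, following_nicknames, limit \\ 10) do
    following = MapSet.new(following_nicknames)

    candidates
    # false sorts before true, so negate membership to float followed accounts up
    |> Enum.sort_by(fn %{nickname: nick} -> {!MapSet.member?(following, nick), nick} end)
    |> Enum.take(limit)
  end
end

# AutocompleteRankSketch.rank([%{nickname: "pwned"}, %{nickname: "p"}], ["p"])
# #=> [%{nickname: "p"}, %{nickname: "pwned"}]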
in reply to Joel

@lanodan it sorta already does that on the client side, but you might also want to autocomplete someone who isn't on your following list
in reply to Den Datafag Trollmann :flag:

@hj @lanodan well sure, but the accounts you're following should come first. Trying to mention @p shouldn't take nearly as long as it does.
in reply to Joel

@p @lanodan IIRC it at least used to be that the client does its own search first, waits for the backend to respond, and then updates the autocomplete with the backend response, but that causes jumpy behavior. Additionally we don't (didn't?) have proper sorting by "last active", so the order is a mess.

It could be done and improved but it's gonna take some doing
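
One way the jumpiness could be avoided, sketched here in Elixir for illustration even though this part lives in the frontend: keep the instant local matches where they are and only append backend results the client didn't already have.

defmodule MergeSketch do
  # local_results stay in place; backend_results only add entries whose
  # nickname the client hadn't matched yet, so the dropdown doesn't reshuffle.
  def merge(local_results, backend_results) do
    seen = MapSet.new(local_results, & &1.nickname)
    local_results ++ Enum.reject(backend_results, &MapSet.member?(seen, &1.nickname))
  end
end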

in reply to Den Datafag Trollmann :flag:

@hj @p @lanodan yeah
What about indexing? Is there a better way to index it? IIRC pleroma uses a lot of custom functions and stuff but doesn't have any cost values assigned.
in reply to Joel

@hj @lanodan Well, at least trivially, searching by prefix instead of substring would get not just the 90% case but probably the 99% case. And I *think* this part has been fixed, but before it was doing tsvector, like it was dropping the "@" and treating "@p@fseb" as a search for "p" and "fseb"; I'd have to go looking again.

...Then I ended up looking again.

That code actually has kind of a long path between the HTTP request and the database, like lib/pleroma/activity/search.ex around the query_with()s. While I'm here, it looks like what I said about substrings is wrong, because it's doing "to_tsvector('english', ?->>'content') @@ plainto_tsquery('english', ?)". Autocomplete uses the same search endpoint as regular search, so /api/v1/accounts/search?q=asd&resolve=true, and I could have sworn there was a substring bit but I don't see anything like that.
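
For contrast, a sketch of what a prefix-only nickname lookup could look like in Ecto; the table and field names are assumptions, not the actual Pleroma query. Unlike the full-text fragment above, a prefix match can be answered from a plain index.

defmodule PrefixSearchSketch do
  import Ecto.Query

  # Prefix match on the nickname column; an expression index on
  # lower(nickname) (text_pattern_ops) can answer this without FTS.
  # (LIKE metacharacters in the prefix are not escaped in this sketch.)
  def query(prefix, limit \\ 10) do
    pattern = String.downcase(prefix) <> "%"

    from(u in "users",
      where: fragment("lower(?) LIKE ?", u.nickname, ^pattern),
      select: %{nickname: u.nickname, name: u.name},
      limit: ^limit
    )
    # |> Repo.all()  # whichever repo module the app uses
  end
end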

in reply to pistolero

@p @hj user-search is at lib/pleroma/user/search.ex
Which is quite a mess and makes me wish Elixir had early returns, for the case where an @ is present and we could assume a substring match rather than FTS…
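
Without early returns, the usual Elixir shape for that branch is to split on the "@" case up front and dispatch; a sketch with placeholder implementations, not the code in lib/pleroma/user/search.ex:

defmodule UserSearchSketch do
  def search(query) do
    if String.contains?(query, "@") do
      # looks like @nick or nick@host: assume a nickname/domain substring match
      substring_search(query)
    else
      # plain words: fall through to full-text search
      fulltext_search(query)
    end
  end

  defp substring_search(query), do: {:substring, String.trim_leading(query, "@")}
  defp fulltext_search(query), do: {:fts, query}
end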
in reply to Den Datafag Trollmann :flag:

@hj @lanodan For posts, sure. If we're talking about autocompleting nicknames, though, we have ~750k users; that's 16MB, just short of 19MB as JSON, and FSE's database is huge and old. (749,027 users, 17390303 bytes ~ 16.6MB.) It's really easy to cache 16MB and it's probably going to be closer to 2-4MB for a lot of instances. You don't really need to do a full-text search for autocomplete; I think what happens now is a substring search on the nickname and the name, and it sends back the entire bundle of account information for everyone, etc. It would be faster to ship the entire list to the user like with emojis and then load the rest of the account information on demand.
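
A back-of-envelope sketch of the "ship it like the emoji list" idea, nothing from Pleroma; the {nickname, name} pairs and the refresh strategy are assumptions. Keep the small pairs in memory and answer autocomplete from them, loading full account data only when a result is picked.

defmodule NicknameCacheSketch do
  use Agent

  # pairs: list of {nickname, display_name} tuples, rebuilt periodically
  # from the database; ~750k of these is in the tens of megabytes.
  def start_link(pairs) do
    Agent.start_link(fn -> pairs end, name: __MODULE__)
  end

  def complete(prefix, limit \\ 10) do
    down = String.downcase(prefix)

    Agent.get(__MODULE__, fn pairs ->
      pairs
      |> Enum.filter(fn {nick, _name} -> String.starts_with?(String.downcase(nick), down) end)
      |> Enum.take(limit)
    end)
  end
end

Even a naive linear scan over the cached list never touches the database, and a sorted structure or ETS could tighten it further if needed.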
in reply to pistolero

@p @hj Yeah for users you could probably store the whole shebang in RAM.
But due to how brain-dead the MastodonAPI design is, most searches are combined users+full-text+hashtags (I'd guess hashtags are basically gratis in comparison to full-text).

And shoving the list of users to clients… reminds me that AdminFE somehow manages to almost choke on just loading the list of instances. Most of that is likely because the UI was just cobbled together and ends up generating a massive DOM rather than just keeping the JSON in memory, but still.