Splade Encoder GPU Support

I am using the pinecone-text library (GitHub: pinecone-io/pinecone-text, the Pinecone text client library) for generating sparse vectors.

However, generating the sparse vectors via Splade is taking a very long time, and I think it is not utilizing the GPU. Has anyone tried using a GPU for this?

Hi @ashleychan,
To leverage the GPU, you can simply init the SpladeEncoder as follows:

from pinecone_text.sparse import SpladeEncoder

splade = SpladeEncoder(device="cuda")

or you can dynamically detect whether CUDA is available:

import torch
from pinecone_text.sparse import SpladeEncoder

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
splade = SpladeEncoder(device=device)

I’ll add it to the repo README so it will be clearer.


Thanks! I found the solution myself prior to your reply and closed the issue on the repository.

Does this also apply to BM25Encoder? Is there a way to initialize it with a GPU, or to make bm25.fit() faster?

Unfortunately, the BM25 encoder currently has no GPU optimizations. The resulting JSON is a simple DF count (a dictionary mapping each token to the number of documents it appears in), so theoretically you can parallelize multiple fit calls on distinct shards of your training data and then simply merge the outputs.
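The shard-and-merge idea can be sketched in plain Python. Note this is only an illustration: the whitespace tokenizer and the helper names (`df_count`, `parallel_df`) are hypothetical, not part of pinecone-text, and a real setup would use BM25Encoder's own tokenizer and process-based parallelism for CPU-bound tokenization:

```python
from collections import Counter
from concurrent.futures import ThreadPoolExecutor


def df_count(docs):
    """DF count for one shard: each token is counted at most once
    per document it appears in (hence the set())."""
    counts = Counter()
    for doc in docs:
        counts.update(set(doc.lower().split()))
    return counts


def parallel_df(corpus, n_shards=4):
    """Run df_count on disjoint shards in parallel, then merge.
    Because the shards are disjoint, merging is a plain
    element-wise sum of the per-shard counters."""
    shards = [corpus[i::n_shards] for i in range(n_shards)]
    with ThreadPoolExecutor(max_workers=n_shards) as pool:
        partials = list(pool.map(df_count, shards))
    merged = Counter()
    for partial in partials:
        merged.update(partial)  # Counter.update adds counts
    return merged
```

Since the merge is just a sum of counters, the result is identical to fitting on the whole corpus in one pass, regardless of how the documents are sharded.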